2025-11-08 12:58:12.939115 | Job console starting
2025-11-08 12:58:12.948845 | Updating git repos
2025-11-08 12:58:13.011788 | Cloning repos into workspace
2025-11-08 12:58:13.233433 | Restoring repo states
2025-11-08 12:58:13.252708 | Merging changes
2025-11-08 12:58:13.252726 | Checking out repos
2025-11-08 12:58:13.638494 | Preparing playbooks
2025-11-08 12:58:14.256095 | Running Ansible setup
2025-11-08 12:58:18.421032 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-11-08 12:58:19.141628 |
2025-11-08 12:58:19.141782 | PLAY [Base pre]
2025-11-08 12:58:19.158462 |
2025-11-08 12:58:19.158612 | TASK [Setup log path fact]
2025-11-08 12:58:19.188269 | orchestrator | ok
2025-11-08 12:58:19.205274 |
2025-11-08 12:58:19.205411 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-11-08 12:58:19.246068 | orchestrator | ok
2025-11-08 12:58:19.258809 |
2025-11-08 12:58:19.258952 | TASK [emit-job-header : Print job information]
2025-11-08 12:58:19.304985 | # Job Information
2025-11-08 12:58:19.305259 | Ansible Version: 2.16.14
2025-11-08 12:58:19.305320 | Job: testbed-deploy-in-a-nutshell-ubuntu-24.04
2025-11-08 12:58:19.305381 | Pipeline: post
2025-11-08 12:58:19.305421 | Executor: 521e9411259a
2025-11-08 12:58:19.305458 | Triggered by: https://github.com/osism/testbed/commit/4ff9cbdfcc9550d96d41dece3c640166574a3263
2025-11-08 12:58:19.305496 | Event ID: 90fea182-bca2-11f0-801e-f69f41872d72
2025-11-08 12:58:19.315409 |
2025-11-08 12:58:19.315538 | LOOP [emit-job-header : Print node information]
2025-11-08 12:58:19.426375 | orchestrator | ok:
2025-11-08 12:58:19.426595 | orchestrator | # Node Information
2025-11-08 12:58:19.426630 | orchestrator | Inventory Hostname: orchestrator
2025-11-08 12:58:19.426655 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-11-08 12:58:19.426677 | orchestrator | Username: zuul-testbed05
2025-11-08 12:58:19.426697 | orchestrator | Distro: Debian 12.12
2025-11-08 12:58:19.426720 | orchestrator | Provider: static-testbed
2025-11-08 12:58:19.426741 | orchestrator | Region:
2025-11-08 12:58:19.426762 | orchestrator | Label: testbed-orchestrator
2025-11-08 12:58:19.426781 | orchestrator | Product Name: OpenStack Nova
2025-11-08 12:58:19.426801 | orchestrator | Interface IP: 81.163.193.140
2025-11-08 12:58:19.457093 |
2025-11-08 12:58:19.457264 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-11-08 12:58:19.916955 | orchestrator -> localhost | changed
2025-11-08 12:58:19.925124 |
2025-11-08 12:58:19.925247 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-11-08 12:58:20.914276 | orchestrator -> localhost | changed
2025-11-08 12:58:20.935979 |
2025-11-08 12:58:20.936134 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-11-08 12:58:21.212450 | orchestrator -> localhost | ok
2025-11-08 12:58:21.219639 |
2025-11-08 12:58:21.219758 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-11-08 12:58:21.248858 | orchestrator | ok
2025-11-08 12:58:21.265810 | orchestrator | included: /var/lib/zuul/builds/0d6f5d21fae74aeb8ef4d65207790d8f/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-11-08 12:58:21.273785 |
2025-11-08 12:58:21.273882 | TASK [add-build-sshkey : Create Temp SSH key]
2025-11-08 12:58:22.238105 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-11-08 12:58:22.238586 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/0d6f5d21fae74aeb8ef4d65207790d8f/work/0d6f5d21fae74aeb8ef4d65207790d8f_id_rsa
2025-11-08 12:58:22.238691 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/0d6f5d21fae74aeb8ef4d65207790d8f/work/0d6f5d21fae74aeb8ef4d65207790d8f_id_rsa.pub
2025-11-08 12:58:22.238761 | orchestrator -> localhost | The key fingerprint is:
2025-11-08 12:58:22.238828 | orchestrator -> localhost | SHA256:cadk+y8IPvFbw5EjZSwr3z3JPSVPOQq8VldmFXfAo4M zuul-build-sshkey
2025-11-08 12:58:22.239011 | orchestrator -> localhost | The key's randomart image is:
2025-11-08 12:58:22.239089 | orchestrator -> localhost | +---[RSA 3072]----+
2025-11-08 12:58:22.239145 | orchestrator -> localhost | | ..o=|
2025-11-08 12:58:22.239200 | orchestrator -> localhost | | . o +|
2025-11-08 12:58:22.239252 | orchestrator -> localhost | | . =.=. .+|
2025-11-08 12:58:22.239303 | orchestrator -> localhost | | =EOo. oo|
2025-11-08 12:58:22.239353 | orchestrator -> localhost | | S =o+o.+o|
2025-11-08 12:58:22.239412 | orchestrator -> localhost | | oo =+=o*o|
2025-11-08 12:58:22.239466 | orchestrator -> localhost | | . +.+*.=.o|
2025-11-08 12:58:22.239518 | orchestrator -> localhost | | o +..o ..|
2025-11-08 12:58:22.239632 | orchestrator -> localhost | | ... .. |
2025-11-08 12:58:22.239696 | orchestrator -> localhost | +----[SHA256]-----+
2025-11-08 12:58:22.239815 | orchestrator -> localhost | ok: Runtime: 0:00:00.463663
2025-11-08 12:58:22.252974 |
2025-11-08 12:58:22.253104 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-11-08 12:58:22.288675 | orchestrator | ok
2025-11-08 12:58:22.303270 | orchestrator | included: /var/lib/zuul/builds/0d6f5d21fae74aeb8ef4d65207790d8f/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-11-08 12:58:22.312817 |
2025-11-08 12:58:22.312918 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-11-08 12:58:22.336442 | orchestrator | skipping: Conditional result was False
2025-11-08 12:58:22.344113 |
2025-11-08 12:58:22.344216 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-11-08 12:58:22.929419 | orchestrator | changed
2025-11-08 12:58:22.938286 |
2025-11-08 12:58:22.938410 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-11-08 12:58:23.218113 | orchestrator | ok
2025-11-08 12:58:23.227345 |
2025-11-08 12:58:23.227469 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-11-08 12:58:23.641045 | orchestrator | ok
2025-11-08 12:58:23.649499 |
2025-11-08 12:58:23.649643 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-11-08 12:58:24.050937 | orchestrator | ok
2025-11-08 12:58:24.059561 |
2025-11-08 12:58:24.059710 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-11-08 12:58:24.084486 | orchestrator | skipping: Conditional result was False
2025-11-08 12:58:24.098544 |
2025-11-08 12:58:24.098743 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-11-08 12:58:24.541967 | orchestrator -> localhost | changed
2025-11-08 12:58:24.555922 |
2025-11-08 12:58:24.556044 | TASK [add-build-sshkey : Add back temp key]
2025-11-08 12:58:24.870434 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/0d6f5d21fae74aeb8ef4d65207790d8f/work/0d6f5d21fae74aeb8ef4d65207790d8f_id_rsa (zuul-build-sshkey)
2025-11-08 12:58:24.870801 | orchestrator -> localhost | ok: Runtime: 0:00:00.018068
2025-11-08 12:58:24.883840 |
2025-11-08 12:58:24.884816 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-11-08 12:58:25.329230 | orchestrator | ok
2025-11-08 12:58:25.337908 |
2025-11-08 12:58:25.338034 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-11-08 12:58:25.361866 | orchestrator | skipping: Conditional result was False
2025-11-08 12:58:25.423272 |
2025-11-08 12:58:25.423439 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-11-08 12:58:25.800157 | orchestrator | ok
2025-11-08 12:58:25.816436 |
2025-11-08 12:58:25.816623 | TASK [validate-host : Define zuul_info_dir fact]
2025-11-08 12:58:25.862654 | orchestrator | ok
2025-11-08 12:58:25.872831 |
2025-11-08 12:58:25.872949 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-11-08 12:58:26.190405 | orchestrator -> localhost | ok
2025-11-08 12:58:26.198016 |
2025-11-08 12:58:26.198126 | TASK [validate-host : Collect information about the host]
2025-11-08 12:58:27.387251 | orchestrator | ok
2025-11-08 12:58:27.401749 |
2025-11-08 12:58:27.401863 | TASK [validate-host : Sanitize hostname]
2025-11-08 12:58:27.456438 | orchestrator | ok
2025-11-08 12:58:27.465809 |
2025-11-08 12:58:27.465934 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-11-08 12:58:27.995318 | orchestrator -> localhost | changed
2025-11-08 12:58:28.001886 |
2025-11-08 12:58:28.001998 | TASK [validate-host : Collect information about zuul worker]
2025-11-08 12:58:28.430918 | orchestrator | ok
2025-11-08 12:58:28.439086 |
2025-11-08 12:58:28.439240 | TASK [validate-host : Write out all zuul information for each host]
2025-11-08 12:58:28.989937 | orchestrator -> localhost | changed
2025-11-08 12:58:29.001456 |
2025-11-08 12:58:29.001612 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-11-08 12:58:29.288758 | orchestrator | ok
2025-11-08 12:58:29.298592 |
2025-11-08 12:58:29.298741 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-11-08 12:59:14.786669 | orchestrator | changed:
2025-11-08 12:59:14.786982 | orchestrator | .d..t...... src/
2025-11-08 12:59:14.787026 | orchestrator | .d..t...... src/github.com/
2025-11-08 12:59:14.787055 | orchestrator | .d..t...... src/github.com/osism/
2025-11-08 12:59:14.787079 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-11-08 12:59:14.787101 | orchestrator | RedHat.yml
2025-11-08 12:59:14.801098 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-11-08 12:59:14.801115 | orchestrator | RedHat.yml
2025-11-08 12:59:14.801167 | orchestrator | = 1.53.0"...
2025-11-08 12:59:27.520604 | orchestrator | 12:59:27.520 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-11-08 12:59:27.539764 | orchestrator | 12:59:27.539 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-11-08 12:59:28.212710 | orchestrator | 12:59:28.212 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2...
2025-11-08 12:59:29.076892 | orchestrator | 12:59:29.076 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
2025-11-08 12:59:29.439581 | orchestrator | 12:59:29.439 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-11-08 12:59:30.051481 | orchestrator | 12:59:30.051 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-11-08 12:59:30.620031 | orchestrator | 12:59:30.619 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-11-08 12:59:31.206218 | orchestrator | 12:59:31.205 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-11-08 12:59:31.206298 | orchestrator | 12:59:31.206 STDOUT terraform: Providers are signed by their developers.
2025-11-08 12:59:31.206306 | orchestrator | 12:59:31.206 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-11-08 12:59:31.206311 | orchestrator | 12:59:31.206 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-11-08 12:59:31.206318 | orchestrator | 12:59:31.206 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-11-08 12:59:31.206450 | orchestrator | 12:59:31.206 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-11-08 12:59:31.206481 | orchestrator | 12:59:31.206 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-11-08 12:59:31.206530 | orchestrator | 12:59:31.206 STDOUT terraform: you run "tofu init" in the future.
2025-11-08 12:59:31.206537 | orchestrator | 12:59:31.206 STDOUT terraform: OpenTofu has been successfully initialized!
2025-11-08 12:59:31.206617 | orchestrator | 12:59:31.206 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-11-08 12:59:31.206667 | orchestrator | 12:59:31.206 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-11-08 12:59:31.206673 | orchestrator | 12:59:31.206 STDOUT terraform: should now work.
2025-11-08 12:59:31.206812 | orchestrator | 12:59:31.206 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-11-08 12:59:31.206822 | orchestrator | 12:59:31.206 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-11-08 12:59:31.206827 | orchestrator | 12:59:31.206 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-11-08 12:59:31.522404 | orchestrator | 12:59:31.522 STDOUT terraform: Created and switched to workspace "ci"!
2025-11-08 12:59:31.522539 | orchestrator | 12:59:31.522 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-11-08 12:59:31.522564 | orchestrator | 12:59:31.522 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-11-08 12:59:31.522571 | orchestrator | 12:59:31.522 STDOUT terraform: for this configuration.
2025-11-08 12:59:31.766347 | orchestrator | 12:59:31.765 STDOUT terraform: ci.auto.tfvars
2025-11-08 12:59:31.971127 | orchestrator | 12:59:31.970 STDOUT terraform: default_custom.tf
2025-11-08 12:59:33.095902 | orchestrator | 12:59:33.095 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-11-08 12:59:33.607052 | orchestrator | 12:59:33.606 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-11-08 12:59:33.870290 | orchestrator | 12:59:33.870 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-11-08 12:59:33.870559 | orchestrator | 12:59:33.870 STDOUT terraform: plan.
Resource actions are indicated with the following symbols: 2025-11-08 12:59:33.870570 | orchestrator | 12:59:33.870 STDOUT terraform:  + create 2025-11-08 12:59:33.870576 | orchestrator | 12:59:33.870 STDOUT terraform:  <= read (data resources) 2025-11-08 12:59:33.870581 | orchestrator | 12:59:33.870 STDOUT terraform: OpenTofu will perform the following actions: 2025-11-08 12:59:33.870586 | orchestrator | 12:59:33.870 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply 2025-11-08 12:59:33.870599 | orchestrator | 12:59:33.870 STDOUT terraform:  # (config refers to values not yet known) 2025-11-08 12:59:33.870603 | orchestrator | 12:59:33.870 STDOUT terraform:  <= data "openstack_images_image_v2" "image" { 2025-11-08 12:59:33.870607 | orchestrator | 12:59:33.870 STDOUT terraform:  + checksum = (known after apply) 2025-11-08 12:59:33.870611 | orchestrator | 12:59:33.870 STDOUT terraform:  + created_at = (known after apply) 2025-11-08 12:59:33.870615 | orchestrator | 12:59:33.870 STDOUT terraform:  + file = (known after apply) 2025-11-08 12:59:33.870619 | orchestrator | 12:59:33.870 STDOUT terraform:  + id = (known after apply) 2025-11-08 12:59:33.870625 | orchestrator | 12:59:33.870 STDOUT terraform:  + metadata = (known after apply) 2025-11-08 12:59:33.870629 | orchestrator | 12:59:33.870 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-11-08 12:59:33.870633 | orchestrator | 12:59:33.870 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-11-08 12:59:33.870638 | orchestrator | 12:59:33.870 STDOUT terraform:  + most_recent = true 2025-11-08 12:59:33.870675 | orchestrator | 12:59:33.870 STDOUT terraform:  + name = (known after apply) 2025-11-08 12:59:33.870760 | orchestrator | 12:59:33.870 STDOUT terraform:  + protected = (known after apply) 2025-11-08 12:59:33.870767 | orchestrator | 12:59:33.870 STDOUT terraform:  + region = (known after apply) 2025-11-08 12:59:33.870774 | orchestrator | 12:59:33.870 STDOUT terraform:  + schema = (known after apply) 2025-11-08 12:59:33.870818 | orchestrator | 12:59:33.870 STDOUT terraform:  + size_bytes = (known after apply) 2025-11-08 12:59:33.870834 | orchestrator | 12:59:33.870 STDOUT terraform:  + tags = (known after apply) 2025-11-08 12:59:33.870865 | orchestrator | 12:59:33.870 STDOUT terraform:  + updated_at = (known after apply) 2025-11-08 12:59:33.870886 | orchestrator | 12:59:33.870 STDOUT terraform:  } 2025-11-08 12:59:33.870950 | orchestrator | 12:59:33.870 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply 2025-11-08 12:59:33.870958 | orchestrator | 12:59:33.870 STDOUT terraform:  # (config refers to values not yet known) 2025-11-08 12:59:33.870994 | orchestrator | 12:59:33.870 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" { 2025-11-08 12:59:33.871035 | orchestrator | 12:59:33.870 STDOUT terraform:  + checksum = (known after apply) 2025-11-08 12:59:33.871056 | orchestrator | 12:59:33.871 STDOUT terraform:  + created_at = (known after apply) 2025-11-08 12:59:33.871086 | orchestrator | 12:59:33.871 STDOUT terraform:  + file = (known after apply) 2025-11-08 12:59:33.871119 | orchestrator | 12:59:33.871 STDOUT terraform:  + id = (known after apply) 2025-11-08 12:59:33.871145 | orchestrator | 12:59:33.871 STDOUT terraform:  + metadata = (known after apply) 2025-11-08 12:59:33.871183 | orchestrator | 12:59:33.871 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-11-08 12:59:33.871206 | orchestrator | 12:59:33.871 STDOUT terraform:  + 
min_ram_mb = (known after apply) 2025-11-08 12:59:33.871224 | orchestrator | 12:59:33.871 STDOUT terraform:  + most_recent = true 2025-11-08 12:59:33.871252 | orchestrator | 12:59:33.871 STDOUT terraform:  + name = (known after apply) 2025-11-08 12:59:33.871283 | orchestrator | 12:59:33.871 STDOUT terraform:  + protected = (known after apply) 2025-11-08 12:59:33.871309 | orchestrator | 12:59:33.871 STDOUT terraform:  + region = (known after apply) 2025-11-08 12:59:33.871346 | orchestrator | 12:59:33.871 STDOUT terraform:  + schema = (known after apply) 2025-11-08 12:59:33.871381 | orchestrator | 12:59:33.871 STDOUT terraform:  + size_bytes = (known after apply) 2025-11-08 12:59:33.871409 | orchestrator | 12:59:33.871 STDOUT terraform:  + tags = (known after apply) 2025-11-08 12:59:33.871446 | orchestrator | 12:59:33.871 STDOUT terraform:  + updated_at = (known after apply) 2025-11-08 12:59:33.871452 | orchestrator | 12:59:33.871 STDOUT terraform:  } 2025-11-08 12:59:33.871475 | orchestrator | 12:59:33.871 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created 2025-11-08 12:59:33.871505 | orchestrator | 12:59:33.871 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" { 2025-11-08 12:59:33.871543 | orchestrator | 12:59:33.871 STDOUT terraform:  + content = (known after apply) 2025-11-08 12:59:33.871580 | orchestrator | 12:59:33.871 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-11-08 12:59:33.871614 | orchestrator | 12:59:33.871 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-11-08 12:59:33.871646 | orchestrator | 12:59:33.871 STDOUT terraform:  + content_md5 = (known after apply) 2025-11-08 12:59:33.871700 | orchestrator | 12:59:33.871 STDOUT terraform:  + content_sha1 = (known after apply) 2025-11-08 12:59:33.871720 | orchestrator | 12:59:33.871 STDOUT terraform:  + content_sha256 = (known after apply) 2025-11-08 12:59:33.871753 | orchestrator | 12:59:33.871 STDOUT terraform:  + content_sha512 = (known after apply) 2025-11-08 12:59:33.871784 | orchestrator | 12:59:33.871 STDOUT terraform:  + directory_permission = "0777" 2025-11-08 12:59:33.871795 | orchestrator | 12:59:33.871 STDOUT terraform:  + file_permission = "0644" 2025-11-08 12:59:33.871832 | orchestrator | 12:59:33.871 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci" 2025-11-08 12:59:33.871874 | orchestrator | 12:59:33.871 STDOUT terraform:  + id = (known after apply) 2025-11-08 12:59:33.871881 | orchestrator | 12:59:33.871 STDOUT terraform:  } 2025-11-08 12:59:33.871910 | orchestrator | 12:59:33.871 STDOUT terraform:  # local_file.id_rsa_pub will be created 2025-11-08 12:59:33.871956 | orchestrator | 12:59:33.871 STDOUT terraform:  + resource "local_file" "id_rsa_pub" { 2025-11-08 12:59:33.871995 | orchestrator | 12:59:33.871 STDOUT terraform:  + content = (known after apply) 2025-11-08 12:59:33.872041 | orchestrator | 12:59:33.871 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-11-08 12:59:33.872068 | orchestrator | 12:59:33.872 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-11-08 12:59:33.872112 | orchestrator | 12:59:33.872 STDOUT terraform:  + content_md5 = (known after apply) 2025-11-08 12:59:33.872140 | orchestrator | 12:59:33.872 STDOUT terraform:  + content_sha1 = (known after apply) 2025-11-08 12:59:33.872176 | orchestrator | 12:59:33.872 STDOUT terraform:  + content_sha256 = (known after apply) 2025-11-08 12:59:33.872227 | orchestrator | 12:59:33.872 STDOUT terraform:  + content_sha512 = (known after apply) 
2025-11-08 12:59:33.872250 | orchestrator | 12:59:33.872 STDOUT terraform:  + directory_permission = "0777" 2025-11-08 12:59:33.872281 | orchestrator | 12:59:33.872 STDOUT terraform:  + file_permission = "0644" 2025-11-08 12:59:33.872306 | orchestrator | 12:59:33.872 STDOUT terraform:  + filename = ".id_rsa.ci.pub" 2025-11-08 12:59:33.872345 | orchestrator | 12:59:33.872 STDOUT terraform:  + id = (known after apply) 2025-11-08 12:59:33.872351 | orchestrator | 12:59:33.872 STDOUT terraform:  } 2025-11-08 12:59:33.872377 | orchestrator | 12:59:33.872 STDOUT terraform:  # local_file.inventory will be created 2025-11-08 12:59:33.872401 | orchestrator | 12:59:33.872 STDOUT terraform:  + resource "local_file" "inventory" { 2025-11-08 12:59:33.872436 | orchestrator | 12:59:33.872 STDOUT terraform:  + content = (known after apply) 2025-11-08 12:59:33.872470 | orchestrator | 12:59:33.872 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-11-08 12:59:33.872505 | orchestrator | 12:59:33.872 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-11-08 12:59:33.872542 | orchestrator | 12:59:33.872 STDOUT terraform:  + content_md5 = (known after apply) 2025-11-08 12:59:33.872570 | orchestrator | 12:59:33.872 STDOUT terraform:  + content_sha1 = (known after apply) 2025-11-08 12:59:33.872612 | orchestrator | 12:59:33.872 STDOUT terraform:  + content_sha256 = (known after apply) 2025-11-08 12:59:33.872638 | orchestrator | 12:59:33.872 STDOUT terraform:  + content_sha512 = (known after apply) 2025-11-08 12:59:33.872666 | orchestrator | 12:59:33.872 STDOUT terraform:  + directory_permission = "0777" 2025-11-08 12:59:33.872696 | orchestrator | 12:59:33.872 STDOUT terraform:  + file_permission = "0644" 2025-11-08 12:59:33.872713 | orchestrator | 12:59:33.872 STDOUT terraform:  + filename = "inventory.ci" 2025-11-08 12:59:33.872751 | orchestrator | 12:59:33.872 STDOUT terraform:  + id = (known after apply) 2025-11-08 12:59:33.872757 | orchestrator | 12:59:33.872 STDOUT terraform:  } 2025-11-08 12:59:33.872797 | orchestrator | 12:59:33.872 STDOUT terraform:  # local_sensitive_file.id_rsa will be created 2025-11-08 12:59:33.872821 | orchestrator | 12:59:33.872 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" { 2025-11-08 12:59:33.872849 | orchestrator | 12:59:33.872 STDOUT terraform:  + content = (sensitive value) 2025-11-08 12:59:33.872897 | orchestrator | 12:59:33.872 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-11-08 12:59:33.872980 | orchestrator | 12:59:33.872 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-11-08 12:59:33.872988 | orchestrator | 12:59:33.872 STDOUT terraform:  + content_md5 = (known after apply) 2025-11-08 12:59:33.873022 | orchestrator | 12:59:33.872 STDOUT terraform:  + content_sha1 = (known after apply) 2025-11-08 12:59:33.873062 | orchestrator | 12:59:33.873 STDOUT terraform:  + content_sha256 = (known after apply) 2025-11-08 12:59:33.873088 | orchestrator | 12:59:33.873 STDOUT terraform:  + content_sha512 = (known after apply) 2025-11-08 12:59:33.873113 | orchestrator | 12:59:33.873 STDOUT terraform:  + directory_permission = "0700" 2025-11-08 12:59:33.873150 | orchestrator | 12:59:33.873 STDOUT terraform:  + file_permission = "0600" 2025-11-08 12:59:33.873170 | orchestrator | 12:59:33.873 STDOUT terraform:  + filename = ".id_rsa.ci" 2025-11-08 12:59:33.873205 | orchestrator | 12:59:33.873 STDOUT terraform:  + id = (known after apply) 2025-11-08 12:59:33.873239 | orchestrator | 12:59:33.873 STDOUT 
terraform:  } 2025-11-08 12:59:33.873245 | orchestrator | 12:59:33.873 STDOUT terraform:  # null_resource.node_semaphore will be created 2025-11-08 12:59:33.873274 | orchestrator | 12:59:33.873 STDOUT terraform:  + resource "null_resource" "node_semaphore" { 2025-11-08 12:59:33.873298 | orchestrator | 12:59:33.873 STDOUT terraform:  + id = (known after apply) 2025-11-08 12:59:33.873312 | orchestrator | 12:59:33.873 STDOUT terraform:  } 2025-11-08 12:59:33.873355 | orchestrator | 12:59:33.873 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created 2025-11-08 12:59:33.873412 | orchestrator | 12:59:33.873 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" { 2025-11-08 12:59:33.873434 | orchestrator | 12:59:33.873 STDOUT terraform:  + attachment = (known after apply) 2025-11-08 12:59:33.873460 | orchestrator | 12:59:33.873 STDOUT terraform:  + availability_zone = "nova" 2025-11-08 12:59:33.873499 | orchestrator | 12:59:33.873 STDOUT terraform:  + id = (known after apply) 2025-11-08 12:59:33.873530 | orchestrator | 12:59:33.873 STDOUT terraform:  + image_id = (known after apply) 2025-11-08 12:59:33.873581 | orchestrator | 12:59:33.873 STDOUT terraform:  + metadata = (known after apply) 2025-11-08 12:59:33.873607 | orchestrator | 12:59:33.873 STDOUT terraform:  + name = "testbed-volume-manager-base" 2025-11-08 12:59:33.873651 | orchestrator | 12:59:33.873 STDOUT terraform:  + region = (known after apply) 2025-11-08 12:59:33.873657 | orchestrator | 12:59:33.873 STDOUT terraform:  + size = 80 2025-11-08 12:59:33.873678 | orchestrator | 12:59:33.873 STDOUT terraform:  + volume_retype_policy = "never" 2025-11-08 12:59:33.873703 | orchestrator | 12:59:33.873 STDOUT terraform:  + volume_type = "ssd" 2025-11-08 12:59:33.873709 | orchestrator | 12:59:33.873 STDOUT terraform:  } 2025-11-08 12:59:33.873762 | orchestrator | 12:59:33.873 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created 2025-11-08 12:59:33.873802 | orchestrator | 12:59:33.873 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-11-08 12:59:33.873849 | orchestrator | 12:59:33.873 STDOUT terraform:  + attachment = (known after apply) 2025-11-08 12:59:33.873855 | orchestrator | 12:59:33.873 STDOUT terraform:  + availability_zone = "nova" 2025-11-08 12:59:33.873890 | orchestrator | 12:59:33.873 STDOUT terraform:  + id = (known after apply) 2025-11-08 12:59:33.873941 | orchestrator | 12:59:33.873 STDOUT terraform:  + image_id = (known after apply) 2025-11-08 12:59:33.873964 | orchestrator | 12:59:33.873 STDOUT terraform:  + metadata = (known after apply) 2025-11-08 12:59:33.874040 | orchestrator | 12:59:33.873 STDOUT terraform:  + name = "testbed-volume-0-node-base" 2025-11-08 12:59:33.874621 | orchestrator | 12:59:33.874 STDOUT terraform:  + region = (known after apply) 2025-11-08 12:59:33.874698 | orchestrator | 12:59:33.874 STDOUT terraform:  + size = 80 2025-11-08 12:59:33.874758 | orchestrator | 12:59:33.874 STDOUT terraform:  + volume_retype_policy = "never" 2025-11-08 12:59:33.874793 | orchestrator | 12:59:33.874 STDOUT terraform:  + volume_type = "ssd" 2025-11-08 12:59:33.874816 | orchestrator | 12:59:33.874 STDOUT terraform:  } 2025-11-08 12:59:33.874889 | orchestrator | 12:59:33.874 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created 2025-11-08 12:59:33.874969 | orchestrator | 12:59:33.874 STDOUT terraform:  + resource 
"openstack_blockstorage_volume_v3" "node_base_volume" { 2025-11-08 12:59:33.875017 | orchestrator | 12:59:33.874 STDOUT terraform:  + attachment = (known after apply) 2025-11-08 12:59:33.875052 | orchestrator | 12:59:33.875 STDOUT terraform:  + availability_zone = "nova" 2025-11-08 12:59:33.875111 | orchestrator | 12:59:33.875 STDOUT terraform:  + id = (known after apply) 2025-11-08 12:59:33.875159 | orchestrator | 12:59:33.875 STDOUT terraform:  + image_id = (known after apply) 2025-11-08 12:59:33.875203 | orchestrator | 12:59:33.875 STDOUT terraform:  + metadata = (known after apply) 2025-11-08 12:59:33.875253 | orchestrator | 12:59:33.875 STDOUT terraform:  + name = "testbed-volume-1-node-base" 2025-11-08 12:59:33.875295 | orchestrator | 12:59:33.875 STDOUT terraform:  + region = (known after apply) 2025-11-08 12:59:33.875322 | orchestrator | 12:59:33.875 STDOUT terraform:  + size = 80 2025-11-08 12:59:33.875353 | orchestrator | 12:59:33.875 STDOUT terraform:  + volume_retype_policy = "never" 2025-11-08 12:59:33.875383 | orchestrator | 12:59:33.875 STDOUT terraform:  + volume_type = "ssd" 2025-11-08 12:59:33.875404 | orchestrator | 12:59:33.875 STDOUT terraform:  } 2025-11-08 12:59:33.875557 | orchestrator | 12:59:33.875 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created 2025-11-08 12:59:33.875624 | orchestrator | 12:59:33.875 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-11-08 12:59:33.875668 | orchestrator | 12:59:33.875 STDOUT terraform:  + attachment = (known after apply) 2025-11-08 12:59:33.875698 | orchestrator | 12:59:33.875 STDOUT terraform:  + availability_zone = "nova" 2025-11-08 12:59:33.875740 | orchestrator | 12:59:33.875 STDOUT terraform:  + id = (known after apply) 2025-11-08 12:59:33.875782 | orchestrator | 12:59:33.875 STDOUT terraform:  + image_id = (known after apply) 2025-11-08 12:59:33.875827 | orchestrator | 12:59:33.875 STDOUT terraform:  + metadata = (known after apply) 2025-11-08 12:59:33.875898 | orchestrator | 12:59:33.875 STDOUT terraform:  + name = "testbed-volume-2-node-base" 2025-11-08 12:59:33.875980 | orchestrator | 12:59:33.875 STDOUT terraform:  + region = (known after apply) 2025-11-08 12:59:33.876021 | orchestrator | 12:59:33.875 STDOUT terraform:  + size = 80 2025-11-08 12:59:33.876053 | orchestrator | 12:59:33.876 STDOUT terraform:  + volume_retype_policy = "never" 2025-11-08 12:59:33.876085 | orchestrator | 12:59:33.876 STDOUT terraform:  + volume_type = "ssd" 2025-11-08 12:59:33.876106 | orchestrator | 12:59:33.876 STDOUT terraform:  } 2025-11-08 12:59:33.876176 | orchestrator | 12:59:33.876 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created 2025-11-08 12:59:33.876228 | orchestrator | 12:59:33.876 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-11-08 12:59:33.876277 | orchestrator | 12:59:33.876 STDOUT terraform:  + attachment = (known after apply) 2025-11-08 12:59:33.876307 | orchestrator | 12:59:33.876 STDOUT terraform:  + availability_zone = "nova" 2025-11-08 12:59:33.876349 | orchestrator | 12:59:33.876 STDOUT terraform:  + id = (known after apply) 2025-11-08 12:59:33.876390 | orchestrator | 12:59:33.876 STDOUT terraform:  + image_id = (known after apply) 2025-11-08 12:59:33.876432 | orchestrator | 12:59:33.876 STDOUT terraform:  + metadata = (known after apply) 2025-11-08 12:59:33.876483 | orchestrator | 12:59:33.876 STDOUT terraform:  + name = "testbed-volume-3-node-base" 
2025-11-08 12:59:33.876525 | orchestrator | 12:59:33.876 STDOUT terraform:  + region = (known after apply) 2025-11-08 12:59:33.876552 | orchestrator | 12:59:33.876 STDOUT terraform:  + size = 80 2025-11-08 12:59:33.876582 | orchestrator | 12:59:33.876 STDOUT terraform:  + volume_retype_policy = "never" 2025-11-08 12:59:33.876611 | orchestrator | 12:59:33.876 STDOUT terraform:  + volume_type = "ssd" 2025-11-08 12:59:33.876631 | orchestrator | 12:59:33.876 STDOUT terraform:  } 2025-11-08 12:59:33.876684 | orchestrator | 12:59:33.876 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created 2025-11-08 12:59:33.876734 | orchestrator | 12:59:33.876 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-11-08 12:59:33.876777 | orchestrator | 12:59:33.876 STDOUT terraform:  + attachment = (known after apply) 2025-11-08 12:59:33.876816 | orchestrator | 12:59:33.876 STDOUT terraform:  + availability_zone = "nova" 2025-11-08 12:59:33.876860 | orchestrator | 12:59:33.876 STDOUT terraform:  + id = (known after apply) 2025-11-08 12:59:33.876900 | orchestrator | 12:59:33.876 STDOUT terraform:  + image_id = (known after apply) 2025-11-08 12:59:33.876955 | orchestrator | 12:59:33.876 STDOUT terraform:  + metadata = (known after apply) 2025-11-08 12:59:33.877009 | orchestrator | 12:59:33.876 STDOUT terraform:  + name = "testbed-volume-4-node-base" 2025-11-08 12:59:33.877051 | orchestrator | 12:59:33.877 STDOUT terraform:  + region = (known after apply) 2025-11-08 12:59:33.877079 | orchestrator | 12:59:33.877 STDOUT terraform:  + size = 80 2025-11-08 12:59:33.877109 | orchestrator | 12:59:33.877 STDOUT terraform:  + volume_retype_policy = "never" 2025-11-08 12:59:33.877139 | orchestrator | 12:59:33.877 STDOUT terraform:  + volume_type = "ssd" 2025-11-08 12:59:33.877161 | orchestrator | 12:59:33.877 STDOUT terraform:  } 2025-11-08 12:59:33.877215 | orchestrator | 12:59:33.877 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created 2025-11-08 12:59:33.877270 | orchestrator | 12:59:33.877 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-11-08 12:59:33.877313 | orchestrator | 12:59:33.877 STDOUT terraform:  + attachment = (known after apply) 2025-11-08 12:59:33.877344 | orchestrator | 12:59:33.877 STDOUT terraform:  + availability_zone = "nova" 2025-11-08 12:59:33.877386 | orchestrator | 12:59:33.877 STDOUT terraform:  + id = (known after apply) 2025-11-08 12:59:33.877428 | orchestrator | 12:59:33.877 STDOUT terraform:  + image_id = (known after apply) 2025-11-08 12:59:33.877469 | orchestrator | 12:59:33.877 STDOUT terraform:  + metadata = (known after apply) 2025-11-08 12:59:33.877538 | orchestrator | 12:59:33.877 STDOUT terraform:  + name = "testbed-volume-5-node-base" 2025-11-08 12:59:33.877582 | orchestrator | 12:59:33.877 STDOUT terraform:  + region = (known after apply) 2025-11-08 12:59:33.877611 | orchestrator | 12:59:33.877 STDOUT terraform:  + size = 80 2025-11-08 12:59:33.877643 | orchestrator | 12:59:33.877 STDOUT terraform:  + volume_retype_policy = "never" 2025-11-08 12:59:33.877677 | orchestrator | 12:59:33.877 STDOUT terraform:  + volume_type = "ssd" 2025-11-08 12:59:33.877697 | orchestrator | 12:59:33.877 STDOUT terraform:  } 2025-11-08 12:59:33.877747 | orchestrator | 12:59:33.877 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created 2025-11-08 12:59:33.877796 | orchestrator | 12:59:33.877 STDOUT terraform:  + resource 
"openstack_blockstorage_volume_v3" "node_volume" { 2025-11-08 12:59:33.877836 | orchestrator | 12:59:33.877 STDOUT terraform:  + attachment = (known after apply) 2025-11-08 12:59:33.877867 | orchestrator | 12:59:33.877 STDOUT terraform:  + availability_zone = "nova" 2025-11-08 12:59:33.877935 | orchestrator | 12:59:33.877 STDOUT terraform:  + id = (known after apply) 2025-11-08 12:59:33.877983 | orchestrator | 12:59:33.877 STDOUT terraform:  + metadata = (known after apply) 2025-11-08 12:59:33.878042 | orchestrator | 12:59:33.877 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-11-08 12:59:33.878093 | orchestrator | 12:59:33.878 STDOUT terraform:  + region = (known after apply) 2025-11-08 12:59:33.878122 | orchestrator | 12:59:33.878 STDOUT terraform:  + size = 20 2025-11-08 12:59:33.878155 | orchestrator | 12:59:33.878 STDOUT terraform:  + volume_retype_policy = "never" 2025-11-08 12:59:33.878187 | orchestrator | 12:59:33.878 STDOUT terraform:  + volume_type = "ssd" 2025-11-08 12:59:33.878207 | orchestrator | 12:59:33.878 STDOUT terraform:  } 2025-11-08 12:59:33.878257 | orchestrator | 12:59:33.878 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-11-08 12:59:33.878306 | orchestrator | 12:59:33.878 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-11-08 12:59:33.878350 | orchestrator | 12:59:33.878 STDOUT terraform:  + attachment = (known after apply) 2025-11-08 12:59:33.878382 | orchestrator | 12:59:33.878 STDOUT terraform:  + availability_zone = "nova" 2025-11-08 12:59:33.878429 | orchestrator | 12:59:33.878 STDOUT terraform:  + id = (known after apply) 2025-11-08 12:59:33.878471 | orchestrator | 12:59:33.878 STDOUT terraform:  + metadata = (known after apply) 2025-11-08 12:59:33.878515 | orchestrator | 12:59:33.878 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-11-08 12:59:33.878557 | orchestrator | 12:59:33.878 STDOUT terraform:  + region = (known after apply) 2025-11-08 12:59:33.878584 | orchestrator | 12:59:33.878 STDOUT terraform:  + size = 20 2025-11-08 12:59:33.878616 | orchestrator | 12:59:33.878 STDOUT terraform:  + volume_retype_policy = "never" 2025-11-08 12:59:33.878646 | orchestrator | 12:59:33.878 STDOUT terraform:  + volume_type = "ssd" 2025-11-08 12:59:33.878666 | orchestrator | 12:59:33.878 STDOUT terraform:  } 2025-11-08 12:59:33.878716 | orchestrator | 12:59:33.878 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-11-08 12:59:33.878766 | orchestrator | 12:59:33.878 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-11-08 12:59:33.878809 | orchestrator | 12:59:33.878 STDOUT terraform:  + attachment = (known after apply) 2025-11-08 12:59:33.878840 | orchestrator | 12:59:33.878 STDOUT terraform:  + availability_zone = "nova" 2025-11-08 12:59:33.878881 | orchestrator | 12:59:33.878 STDOUT terraform:  + id = (known after apply) 2025-11-08 12:59:33.878934 | orchestrator | 12:59:33.878 STDOUT terraform:  + metadata = (known after apply) 2025-11-08 12:59:33.878979 | orchestrator | 12:59:33.878 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-11-08 12:59:33.879022 | orchestrator | 12:59:33.878 STDOUT terraform:  + region = (known after apply) 2025-11-08 12:59:33.879052 | orchestrator | 12:59:33.879 STDOUT terraform:  + size = 20 2025-11-08 12:59:33.879083 | orchestrator | 12:59:33.879 STDOUT terraform:  + volume_retype_policy = "never" 2025-11-08 12:59:33.879114 | orchestrator | 12:59:33.879 
STDOUT terraform:  + volume_type = "ssd" 2025-11-08 12:59:33.879134 | orchestrator | 12:59:33.879 STDOUT terraform:  } 2025-11-08 12:59:33.879186 | orchestrator | 12:59:33.879 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-11-08 12:59:33.879240 | orchestrator | 12:59:33.879 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-11-08 12:59:33.879282 | orchestrator | 12:59:33.879 STDOUT terraform:  + attachment = (known after apply) 2025-11-08 12:59:33.879313 | orchestrator | 12:59:33.879 STDOUT terraform:  + availability_zone = "nova" 2025-11-08 12:59:33.879354 | orchestrator | 12:59:33.879 STDOUT terraform:  + id = (known after apply) 2025-11-08 12:59:33.879395 | orchestrator | 12:59:33.879 STDOUT terraform:  + metadata = (known after apply) 2025-11-08 12:59:33.879440 | orchestrator | 12:59:33.879 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-11-08 12:59:33.879483 | orchestrator | 12:59:33.879 STDOUT terraform:  + region = (known after apply) 2025-11-08 12:59:33.879511 | orchestrator | 12:59:33.879 STDOUT terraform:  + size = 20 2025-11-08 12:59:33.879541 | orchestrator | 12:59:33.879 STDOUT terraform:  + volume_retype_policy = "never" 2025-11-08 12:59:33.879571 | orchestrator | 12:59:33.879 STDOUT terraform:  + volume_type = "ssd" 2025-11-08 12:59:33.879591 | orchestrator | 12:59:33.879 STDOUT terraform:  } 2025-11-08 12:59:33.879643 | orchestrator | 12:59:33.879 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-11-08 12:59:33.879692 | orchestrator | 12:59:33.879 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-11-08 12:59:33.879733 | orchestrator | 12:59:33.879 STDOUT terraform:  + attachment = (known after apply) 2025-11-08 12:59:33.879766 | orchestrator | 12:59:33.879 STDOUT terraform:  + availability_zone = "nova" 2025-11-08 12:59:33.879808 | orchestrator | 12:59:33.879 STDOUT terraform:  + id = (known after apply) 2025-11-08 12:59:33.879849 | orchestrator | 12:59:33.879 STDOUT terraform:  + metadata = (known after apply) 2025-11-08 12:59:33.879893 | orchestrator | 12:59:33.879 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-11-08 12:59:33.879958 | orchestrator | 12:59:33.879 STDOUT terraform:  + region = (known after apply) 2025-11-08 12:59:33.879987 | orchestrator | 12:59:33.879 STDOUT terraform:  + size = 20 2025-11-08 12:59:33.880019 | orchestrator | 12:59:33.879 STDOUT terraform:  + volume_retype_policy = "never" 2025-11-08 12:59:33.880049 | orchestrator | 12:59:33.880 STDOUT terraform:  + volume_type = "ssd" 2025-11-08 12:59:33.880068 | orchestrator | 12:59:33.880 STDOUT terraform:  } 2025-11-08 12:59:33.880190 | orchestrator | 12:59:33.880 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-11-08 12:59:33.880246 | orchestrator | 12:59:33.880 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-11-08 12:59:33.880289 | orchestrator | 12:59:33.880 STDOUT terraform:  + attachment = (known after apply) 2025-11-08 12:59:33.880322 | orchestrator | 12:59:33.880 STDOUT terraform:  + availability_zone = "nova" 2025-11-08 12:59:33.880366 | orchestrator | 12:59:33.880 STDOUT terraform:  + id = (known after apply) 2025-11-08 12:59:33.880409 | orchestrator | 12:59:33.880 STDOUT terraform:  + metadata = (known after apply) 2025-11-08 12:59:33.880462 | orchestrator | 12:59:33.880 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-11-08 
12:59:33.881169 | orchestrator | 12:59:33.881 STDOUT terraform:  + region = (known after apply) 2025-11-08 12:59:33.881201 | orchestrator | 12:59:33.881 STDOUT terraform:  + size = 20 2025-11-08 12:59:33.881235 | orchestrator | 12:59:33.881 STDOUT terraform:  + volume_retype_policy = "never" 2025-11-08 12:59:33.881267 | orchestrator | 12:59:33.881 STDOUT terraform:  + volume_type = "ssd" 2025-11-08 12:59:33.881291 | orchestrator | 12:59:33.881 STDOUT terraform:  } 2025-11-08 12:59:33.881343 | orchestrator | 12:59:33.881 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-11-08 12:59:33.881393 | orchestrator | 12:59:33.881 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-11-08 12:59:33.881436 | orchestrator | 12:59:33.881 STDOUT terraform:  + attachment = (known after apply) 2025-11-08 12:59:33.881467 | orchestrator | 12:59:33.881 STDOUT terraform:  + availability_zone = "nova" 2025-11-08 12:59:33.881509 | orchestrator | 12:59:33.881 STDOUT terraform:  + id = (known after apply) 2025-11-08 12:59:33.881552 | orchestrator | 12:59:33.881 STDOUT terraform:  + metadata = (known after apply) 2025-11-08 12:59:33.881596 | orchestrator | 12:59:33.881 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-11-08 12:59:33.881638 | orchestrator | 12:59:33.881 STDOUT terraform:  + region = (known after apply) 2025-11-08 12:59:33.881667 | orchestrator | 12:59:33.881 STDOUT terraform:  + size = 20 2025-11-08 12:59:33.881698 | orchestrator | 12:59:33.881 STDOUT terraform:  + volume_retype_policy = "never" 2025-11-08 12:59:33.881729 | orchestrator | 12:59:33.881 STDOUT terraform:  + volume_type = "ssd" 2025-11-08 12:59:33.881750 | orchestrator | 12:59:33.881 STDOUT terraform:  } 2025-11-08 12:59:33.881803 | orchestrator | 12:59:33.881 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-11-08 12:59:33.881855 | orchestrator | 12:59:33.881 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-11-08 12:59:33.881896 | orchestrator | 12:59:33.881 STDOUT terraform:  + attachment = (known after apply) 2025-11-08 12:59:33.881953 | orchestrator | 12:59:33.881 STDOUT terraform:  + availability_zone = "nova" 2025-11-08 12:59:33.882000 | orchestrator | 12:59:33.881 STDOUT terraform:  + id = (known after apply) 2025-11-08 12:59:33.882058 | orchestrator | 12:59:33.882 STDOUT terraform:  + metadata = (known after apply) 2025-11-08 12:59:33.882108 | orchestrator | 12:59:33.882 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-11-08 12:59:33.882151 | orchestrator | 12:59:33.882 STDOUT terraform:  + region = (known after apply) 2025-11-08 12:59:33.882179 | orchestrator | 12:59:33.882 STDOUT terraform:  + size = 20 2025-11-08 12:59:33.882209 | orchestrator | 12:59:33.882 STDOUT terraform:  + volume_retype_policy = "never" 2025-11-08 12:59:33.882242 | orchestrator | 12:59:33.882 STDOUT terraform:  + volume_type = "ssd" 2025-11-08 12:59:33.882262 | orchestrator | 12:59:33.882 STDOUT terraform:  } 2025-11-08 12:59:33.882318 | orchestrator | 12:59:33.882 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-11-08 12:59:33.882368 | orchestrator | 12:59:33.882 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-11-08 12:59:33.882412 | orchestrator | 12:59:33.882 STDOUT terraform:  + attachment = (known after apply) 2025-11-08 12:59:33.882443 | orchestrator | 12:59:33.882 STDOUT terraform:  + availability_zone 
= "nova" 2025-11-08 12:59:33.882484 | orchestrator | 12:59:33.882 STDOUT terraform:  + id = (known after apply) 2025-11-08 12:59:33.882525 | orchestrator | 12:59:33.882 STDOUT terraform:  + metadata = (known after apply) 2025-11-08 12:59:33.882568 | orchestrator | 12:59:33.882 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-11-08 12:59:33.882613 | orchestrator | 12:59:33.882 STDOUT terraform:  + region = (known after apply) 2025-11-08 12:59:33.882641 | orchestrator | 12:59:33.882 STDOUT terraform:  + size = 20 2025-11-08 12:59:33.882675 | orchestrator | 12:59:33.882 STDOUT terraform:  + volume_retype_policy = "never" 2025-11-08 12:59:33.882709 | orchestrator | 12:59:33.882 STDOUT terraform:  + volume_type = "ssd" 2025-11-08 12:59:33.882730 | orchestrator | 12:59:33.882 STDOUT terraform:  } 2025-11-08 12:59:33.882778 | orchestrator | 12:59:33.882 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-11-08 12:59:33.882827 | orchestrator | 12:59:33.882 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-11-08 12:59:33.882868 | orchestrator | 12:59:33.882 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-11-08 12:59:33.882909 | orchestrator | 12:59:33.882 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-11-08 12:59:33.882963 | orchestrator | 12:59:33.882 STDOUT terraform:  + all_metadata = (known after apply) 2025-11-08 12:59:33.883005 | orchestrator | 12:59:33.882 STDOUT terraform:  + all_tags = (known after apply) 2025-11-08 12:59:33.883034 | orchestrator | 12:59:33.883 STDOUT terraform:  + availability_zone = "nova" 2025-11-08 12:59:33.883062 | orchestrator | 12:59:33.883 STDOUT terraform:  + config_drive = true 2025-11-08 12:59:33.883105 | orchestrator | 12:59:33.883 STDOUT terraform:  + created = (known after apply) 2025-11-08 12:59:33.883146 | orchestrator | 12:59:33.883 STDOUT terraform:  + flavor_id = (known after apply) 2025-11-08 12:59:33.883188 | orchestrator | 12:59:33.883 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-11-08 12:59:33.883218 | orchestrator | 12:59:33.883 STDOUT terraform:  + force_delete = false 2025-11-08 12:59:33.883258 | orchestrator | 12:59:33.883 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-11-08 12:59:33.883298 | orchestrator | 12:59:33.883 STDOUT terraform:  + id = (known after apply) 2025-11-08 12:59:33.883340 | orchestrator | 12:59:33.883 STDOUT terraform:  + image_id = (known after apply) 2025-11-08 12:59:33.883380 | orchestrator | 12:59:33.883 STDOUT terraform:  + image_name = (known after apply) 2025-11-08 12:59:33.883415 | orchestrator | 12:59:33.883 STDOUT terraform:  + key_pair = "testbed" 2025-11-08 12:59:33.883457 | orchestrator | 12:59:33.883 STDOUT terraform:  + name = "testbed-manager" 2025-11-08 12:59:33.883489 | orchestrator | 12:59:33.883 STDOUT terraform:  + power_state = "active" 2025-11-08 12:59:33.887323 | orchestrator | 12:59:33.885 STDOUT terraform:  + region = (known after apply) 2025-11-08 12:59:33.887363 | orchestrator | 12:59:33.885 STDOUT terraform:  + security_groups = (known after apply) 2025-11-08 12:59:33.887370 | orchestrator | 12:59:33.886 STDOUT terraform:  + stop_before_destroy = false 2025-11-08 12:59:33.887374 | orchestrator | 12:59:33.886 STDOUT terraform:  + updated = (known after apply) 2025-11-08 12:59:33.887378 | orchestrator | 12:59:33.886 STDOUT terraform:  + user_data = (sensitive value) 2025-11-08 12:59:33.887382 | orchestrator | 12:59:33.886 STDOUT terraform:  + block_device { 
2025-11-08 12:59:33.887386 | orchestrator | 12:59:33.886 STDOUT terraform:  + boot_index = 0 2025-11-08 12:59:33.887390 | orchestrator | 12:59:33.886 STDOUT terraform:  + delete_on_termination = false 2025-11-08 12:59:33.887394 | orchestrator | 12:59:33.886 STDOUT terraform:  + destination_type = "volume" 2025-11-08 12:59:33.887397 | orchestrator | 12:59:33.886 STDOUT terraform:  + multiattach = false 2025-11-08 12:59:33.887401 | orchestrator | 12:59:33.886 STDOUT terraform:  + source_type = "volume" 2025-11-08 12:59:33.887405 | orchestrator | 12:59:33.886 STDOUT terraform:  + uuid = (known after apply) 2025-11-08 12:59:33.887409 | orchestrator | 12:59:33.887 STDOUT terraform:  } 2025-11-08 12:59:33.887413 | orchestrator | 12:59:33.887 STDOUT terraform:  + network { 2025-11-08 12:59:33.887417 | orchestrator | 12:59:33.887 STDOUT terraform:  + access_network = false 2025-11-08 12:59:33.887421 | orchestrator | 12:59:33.887 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-11-08 12:59:33.887440 | orchestrator | 12:59:33.887 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-11-08 12:59:33.899202 | orchestrator | 12:59:33.887 STDOUT terraform:  + mac = (known after apply) 2025-11-08 12:59:33.899398 | orchestrator | 12:59:33.899 STDOUT terraform:  + name = (known after apply) 2025-11-08 12:59:33.899468 | orchestrator | 12:59:33.899 STDOUT terraform:  + port = (known after apply) 2025-11-08 12:59:33.899615 | orchestrator | 12:59:33.899 STDOUT terraform:  + uuid = (known after apply) 2025-11-08 12:59:33.899650 | orchestrator | 12:59:33.899 STDOUT terraform:  } 2025-11-08 12:59:33.899721 | orchestrator | 12:59:33.899 STDOUT terraform:  } 2025-11-08 12:59:33.900367 | orchestrator | 12:59:33.899 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-11-08 12:59:33.900394 | orchestrator | 12:59:33.899 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-11-08 12:59:33.900398 | orchestrator | 12:59:33.900 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-11-08 12:59:33.905777 | orchestrator | 12:59:33.900 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-11-08 12:59:33.905810 | orchestrator | 12:59:33.901 STDOUT terraform:  + all_metadata = (known after apply) 2025-11-08 12:59:33.905826 | orchestrator | 12:59:33.901 STDOUT terraform:  + all_tags = (known after apply) 2025-11-08 12:59:33.905830 | orchestrator | 12:59:33.901 STDOUT terraform:  + availability_zone = "nova" 2025-11-08 12:59:33.905835 | orchestrator | 12:59:33.901 STDOUT terraform:  + config_drive = true 2025-11-08 12:59:33.905838 | orchestrator | 12:59:33.901 STDOUT terraform:  + created = (known after apply) 2025-11-08 12:59:33.905842 | orchestrator | 12:59:33.901 STDOUT terraform:  + flavor_id = (known after apply) 2025-11-08 12:59:33.905846 | orchestrator | 12:59:33.901 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-11-08 12:59:33.905849 | orchestrator | 12:59:33.901 STDOUT terraform:  + force_delete = false 2025-11-08 12:59:33.905853 | orchestrator | 12:59:33.901 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-11-08 12:59:33.905857 | orchestrator | 12:59:33.901 STDOUT terraform:  + id = (known after apply) 2025-11-08 12:59:33.905860 | orchestrator | 12:59:33.901 STDOUT terraform:  + image_id = (known after apply) 2025-11-08 12:59:33.905864 | orchestrator | 12:59:33.901 STDOUT terraform:  + image_name = (known after apply) 2025-11-08 12:59:33.905868 | orchestrator | 12:59:33.901 STDOUT terraform:  + key_pair 
= "testbed" 2025-11-08 12:59:33.905872 | orchestrator | 12:59:33.901 STDOUT terraform:  + name = "testbed-node-0" 2025-11-08 12:59:33.905875 | orchestrator | 12:59:33.901 STDOUT terraform:  + power_state = "active" 2025-11-08 12:59:33.905879 | orchestrator | 12:59:33.901 STDOUT terraform:  + region = (known after apply) 2025-11-08 12:59:33.905894 | orchestrator | 12:59:33.901 STDOUT terraform:  + security_groups = (known after apply) 2025-11-08 12:59:33.905898 | orchestrator | 12:59:33.901 STDOUT terraform:  + stop_before_destroy = false 2025-11-08 12:59:33.905902 | orchestrator | 12:59:33.901 STDOUT terraform:  + updated = (known after apply) 2025-11-08 12:59:33.905906 | orchestrator | 12:59:33.901 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-11-08 12:59:33.905910 | orchestrator | 12:59:33.901 STDOUT terraform:  + block_device { 2025-11-08 12:59:33.905914 | orchestrator | 12:59:33.901 STDOUT terraform:  + boot_index = 0 2025-11-08 12:59:33.905956 | orchestrator | 12:59:33.901 STDOUT terraform:  + delete_on_termination = false 2025-11-08 12:59:33.905960 | orchestrator | 12:59:33.901 STDOUT terraform:  + destination_type = "volume" 2025-11-08 12:59:33.905964 | orchestrator | 12:59:33.901 STDOUT terraform:  + multiattach = false 2025-11-08 12:59:33.905968 | orchestrator | 12:59:33.901 STDOUT terraform:  + source_type = "volume" 2025-11-08 12:59:33.905972 | orchestrator | 12:59:33.902 STDOUT terraform:  + uuid = (known after apply) 2025-11-08 12:59:33.905977 | orchestrator | 12:59:33.902 STDOUT terraform:  } 2025-11-08 12:59:33.905980 | orchestrator | 12:59:33.902 STDOUT terraform:  + network { 2025-11-08 12:59:33.905984 | orchestrator | 12:59:33.902 STDOUT terraform:  + access_network = false 2025-11-08 12:59:33.905988 | orchestrator | 12:59:33.902 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-11-08 12:59:33.905992 | orchestrator | 12:59:33.902 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-11-08 12:59:33.906000 | orchestrator | 12:59:33.902 STDOUT terraform:  + mac = (known after apply) 2025-11-08 12:59:33.906003 | orchestrator | 12:59:33.902 STDOUT terraform:  + name = (known after apply) 2025-11-08 12:59:33.906007 | orchestrator | 12:59:33.902 STDOUT terraform:  + port = (known after apply) 2025-11-08 12:59:33.906011 | orchestrator | 12:59:33.902 STDOUT terraform:  + uuid = (known after apply) 2025-11-08 12:59:33.906036 | orchestrator | 12:59:33.902 STDOUT terraform:  } 2025-11-08 12:59:33.906051 | orchestrator | 12:59:33.902 STDOUT terraform:  } 2025-11-08 12:59:33.906055 | orchestrator | 12:59:33.902 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-11-08 12:59:33.906059 | orchestrator | 12:59:33.902 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-11-08 12:59:33.906063 | orchestrator | 12:59:33.902 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-11-08 12:59:33.906066 | orchestrator | 12:59:33.902 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-11-08 12:59:33.906070 | orchestrator | 12:59:33.902 STDOUT terraform:  + all_metadata = (known after apply) 2025-11-08 12:59:33.906074 | orchestrator | 12:59:33.902 STDOUT terraform:  + all_tags = (known after apply) 2025-11-08 12:59:33.906077 | orchestrator | 12:59:33.902 STDOUT terraform:  + availability_zone = "nova" 2025-11-08 12:59:33.906081 | orchestrator | 12:59:33.902 STDOUT terraform:  + config_drive = true 2025-11-08 12:59:33.906085 | orchestrator | 12:59:33.902 STDOUT terraform: 
 + created = (known after apply) 2025-11-08 12:59:33.906089 | orchestrator | 12:59:33.902 STDOUT terraform:  + flavor_id = (known after apply) 2025-11-08 12:59:33.906092 | orchestrator | 12:59:33.902 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-11-08 12:59:33.906096 | orchestrator | 12:59:33.902 STDOUT terraform:  + force_delete = false 2025-11-08 12:59:33.906100 | orchestrator | 12:59:33.902 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-11-08 12:59:33.906104 | orchestrator | 12:59:33.902 STDOUT terraform:  + id = (known after apply) 2025-11-08 12:59:33.906107 | orchestrator | 12:59:33.902 STDOUT terraform:  + image_id = (known after apply) 2025-11-08 12:59:33.906111 | orchestrator | 12:59:33.902 STDOUT terraform:  + image_name = (known after apply) 2025-11-08 12:59:33.906115 | orchestrator | 12:59:33.902 STDOUT terraform:  + key_pair = "testbed" 2025-11-08 12:59:33.906118 | orchestrator | 12:59:33.902 STDOUT terraform:  + name = "testbed-node-1" 2025-11-08 12:59:33.906122 | orchestrator | 12:59:33.902 STDOUT terraform:  + power_state = "active" 2025-11-08 12:59:33.906125 | orchestrator | 12:59:33.902 STDOUT terraform:  + region = (known after apply) 2025-11-08 12:59:33.906129 | orchestrator | 12:59:33.902 STDOUT terraform:  + security_groups = (known after apply) 2025-11-08 12:59:33.906133 | orchestrator | 12:59:33.902 STDOUT terraform:  + stop_before_destroy = false 2025-11-08 12:59:33.906137 | orchestrator | 12:59:33.902 STDOUT terraform:  + updated = (known after apply) 2025-11-08 12:59:33.906144 | orchestrator | 12:59:33.902 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-11-08 12:59:33.906148 | orchestrator | 12:59:33.902 STDOUT terraform:  + block_device { 2025-11-08 12:59:33.906151 | orchestrator | 12:59:33.902 STDOUT terraform:  + boot_index = 0 2025-11-08 12:59:33.906155 | orchestrator | 12:59:33.902 STDOUT terraform:  + delete_on_termination = false 2025-11-08 12:59:33.906159 | orchestrator | 12:59:33.903 STDOUT terraform:  + destination_type = "volume" 2025-11-08 12:59:33.906162 | orchestrator | 12:59:33.903 STDOUT terraform:  + multiattach = false 2025-11-08 12:59:33.906166 | orchestrator | 12:59:33.903 STDOUT terraform:  + source_type = "volume" 2025-11-08 12:59:33.906172 | orchestrator | 12:59:33.903 STDOUT terraform:  + uuid = (known after apply) 2025-11-08 12:59:33.906176 | orchestrator | 12:59:33.903 STDOUT terraform:  } 2025-11-08 12:59:33.906180 | orchestrator | 12:59:33.903 STDOUT terraform:  + network { 2025-11-08 12:59:33.906184 | orchestrator | 12:59:33.903 STDOUT terraform:  + access_network = false 2025-11-08 12:59:33.906187 | orchestrator | 12:59:33.903 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-11-08 12:59:33.906191 | orchestrator | 12:59:33.903 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-11-08 12:59:33.906195 | orchestrator | 12:59:33.903 STDOUT terraform:  + mac = (known after apply) 2025-11-08 12:59:33.906202 | orchestrator | 12:59:33.903 STDOUT terraform:  + name = (known after apply) 2025-11-08 12:59:33.906211 | orchestrator | 12:59:33.903 STDOUT terraform:  + port = (known after apply) 2025-11-08 12:59:33.906215 | orchestrator | 12:59:33.903 STDOUT terraform:  + uuid = (known after apply) 2025-11-08 12:59:33.906219 | orchestrator | 12:59:33.903 STDOUT terraform:  } 2025-11-08 12:59:33.906222 | orchestrator | 12:59:33.903 STDOUT terraform:  } 2025-11-08 12:59:33.906226 | orchestrator | 12:59:33.903 STDOUT terraform:  # 
openstack_compute_instance_v2.node_server[2] will be created 2025-11-08 12:59:33.906230 | orchestrator | 12:59:33.903 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-11-08 12:59:33.906234 | orchestrator | 12:59:33.903 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-11-08 12:59:33.906237 | orchestrator | 12:59:33.903 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-11-08 12:59:33.906241 | orchestrator | 12:59:33.903 STDOUT terraform:  + all_metadata = (known after apply) 2025-11-08 12:59:33.906245 | orchestrator | 12:59:33.903 STDOUT terraform:  + all_tags = (known after apply) 2025-11-08 12:59:33.906248 | orchestrator | 12:59:33.903 STDOUT terraform:  + availability_zone = "nova" 2025-11-08 12:59:33.906252 | orchestrator | 12:59:33.903 STDOUT terraform:  + config_drive = true 2025-11-08 12:59:33.906256 | orchestrator | 12:59:33.903 STDOUT terraform:  + created = (known after apply) 2025-11-08 12:59:33.906259 | orchestrator | 12:59:33.903 STDOUT terraform:  + flavor_id = (known after apply) 2025-11-08 12:59:33.906263 | orchestrator | 12:59:33.903 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-11-08 12:59:33.906270 | orchestrator | 12:59:33.903 STDOUT terraform:  + force_delete = false 2025-11-08 12:59:33.906274 | orchestrator | 12:59:33.903 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-11-08 12:59:33.906277 | orchestrator | 12:59:33.903 STDOUT terraform:  + id = (known after apply) 2025-11-08 12:59:33.906281 | orchestrator | 12:59:33.903 STDOUT terraform:  + image_id = (known after apply) 2025-11-08 12:59:33.906285 | orchestrator | 12:59:33.903 STDOUT terraform:  + image_name = (known after apply) 2025-11-08 12:59:33.906288 | orchestrator | 12:59:33.903 STDOUT terraform:  + key_pair = "testbed" 2025-11-08 12:59:33.906292 | orchestrator | 12:59:33.903 STDOUT terraform:  + name = "testbed-node-2" 2025-11-08 12:59:33.906296 | orchestrator | 12:59:33.903 STDOUT terraform:  + power_state = "active" 2025-11-08 12:59:33.906300 | orchestrator | 12:59:33.903 STDOUT terraform:  + region = (known after apply) 2025-11-08 12:59:33.906303 | orchestrator | 12:59:33.903 STDOUT terraform:  + security_groups = (known after apply) 2025-11-08 12:59:33.906307 | orchestrator | 12:59:33.903 STDOUT terraform:  + stop_before_destroy = false 2025-11-08 12:59:33.906311 | orchestrator | 12:59:33.903 STDOUT terraform:  + updated = (known after apply) 2025-11-08 12:59:33.906314 | orchestrator | 12:59:33.903 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-11-08 12:59:33.906318 | orchestrator | 12:59:33.904 STDOUT terraform:  + block_device { 2025-11-08 12:59:33.906322 | orchestrator | 12:59:33.904 STDOUT terraform:  + boot_index = 0 2025-11-08 12:59:33.906325 | orchestrator | 12:59:33.904 STDOUT terraform:  + delete_on_termination = false 2025-11-08 12:59:33.906329 | orchestrator | 12:59:33.904 STDOUT terraform:  + destination_type = "volume" 2025-11-08 12:59:33.906333 | orchestrator | 12:59:33.904 STDOUT terraform:  + multiattach = false 2025-11-08 12:59:33.906336 | orchestrator | 12:59:33.904 STDOUT terraform:  + source_type = "volume" 2025-11-08 12:59:33.906340 | orchestrator | 12:59:33.904 STDOUT terraform:  + uuid = (known after apply) 2025-11-08 12:59:33.906344 | orchestrator | 12:59:33.904 STDOUT terraform:  } 2025-11-08 12:59:33.906348 | orchestrator | 12:59:33.904 STDOUT terraform:  + network { 2025-11-08 12:59:33.906354 | orchestrator | 12:59:33.904 STDOUT terraform:  + access_network = 
false 2025-11-08 12:59:33.906358 | orchestrator | 12:59:33.904 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-11-08 12:59:33.906361 | orchestrator | 12:59:33.904 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-11-08 12:59:33.906365 | orchestrator | 12:59:33.904 STDOUT terraform:  + mac = (known after apply) 2025-11-08 12:59:33.906369 | orchestrator | 12:59:33.904 STDOUT terraform:  + name = (known after apply) 2025-11-08 12:59:33.906373 | orchestrator | 12:59:33.904 STDOUT terraform:  + port = (known after apply) 2025-11-08 12:59:33.906376 | orchestrator | 12:59:33.904 STDOUT terraform:  + uuid = (known after apply) 2025-11-08 12:59:33.906380 | orchestrator | 12:59:33.904 STDOUT terraform:  } 2025-11-08 12:59:33.906384 | orchestrator | 12:59:33.904 STDOUT terraform:  } 2025-11-08 12:59:33.906391 | orchestrator | 12:59:33.904 STDOUT terraform:  # openstack_compute_instance_v2.node_server[3] will be created 2025-11-08 12:59:33.906394 | orchestrator | 12:59:33.904 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-11-08 12:59:33.906398 | orchestrator | 12:59:33.904 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-11-08 12:59:33.906402 | orchestrator | 12:59:33.904 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-11-08 12:59:33.906405 | orchestrator | 12:59:33.904 STDOUT terraform:  + all_metadata = (known after apply) 2025-11-08 12:59:33.906414 | orchestrator | 12:59:33.904 STDOUT terraform:  + all_tags = (known after apply) 2025-11-08 12:59:33.906418 | orchestrator | 12:59:33.904 STDOUT terraform:  + availability_zone = "nova" 2025-11-08 12:59:33.906421 | orchestrator | 12:59:33.904 STDOUT terraform:  + config_drive = true 2025-11-08 12:59:33.906425 | orchestrator | 12:59:33.904 STDOUT terraform:  + created = (known after apply) 2025-11-08 12:59:33.906429 | orchestrator | 12:59:33.904 STDOUT terraform:  + flavor_id = (known after apply) 2025-11-08 12:59:33.906433 | orchestrator | 12:59:33.904 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-11-08 12:59:33.906436 | orchestrator | 12:59:33.904 STDOUT terraform:  + force_delete = false 2025-11-08 12:59:33.906440 | orchestrator | 12:59:33.904 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-11-08 12:59:33.906444 | orchestrator | 12:59:33.904 STDOUT terraform:  + id = (known after apply) 2025-11-08 12:59:33.906447 | orchestrator | 12:59:33.904 STDOUT terraform:  + image_id = (known after apply) 2025-11-08 12:59:33.906451 | orchestrator | 12:59:33.904 STDOUT terraform:  + image_name = (known after apply) 2025-11-08 12:59:33.906455 | orchestrator | 12:59:33.904 STDOUT terraform:  + key_pair = "testbed" 2025-11-08 12:59:33.906458 | orchestrator | 12:59:33.904 STDOUT terraform:  + name = "testbed-node-3" 2025-11-08 12:59:33.906462 | orchestrator | 12:59:33.904 STDOUT terraform:  + power_state = "active" 2025-11-08 12:59:33.906466 | orchestrator | 12:59:33.904 STDOUT terraform:  + region = (known after apply) 2025-11-08 12:59:33.906470 | orchestrator | 12:59:33.904 STDOUT terraform:  + security_groups = (known after apply) 2025-11-08 12:59:33.906473 | orchestrator | 12:59:33.904 STDOUT terraform:  + stop_before_destroy = false 2025-11-08 12:59:33.906477 | orchestrator | 12:59:33.905 STDOUT terraform:  + updated = (known after apply) 2025-11-08 12:59:33.906481 | orchestrator | 12:59:33.905 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-11-08 12:59:33.906484 | orchestrator | 12:59:33.905 STDOUT terraform:  + 
block_device { 2025-11-08 12:59:33.906488 | orchestrator | 12:59:33.905 STDOUT terraform:  + boot_index = 0 2025-11-08 12:59:33.906492 | orchestrator | 12:59:33.905 STDOUT terraform:  + delete_on_termination = false 2025-11-08 12:59:33.906495 | orchestrator | 12:59:33.905 STDOUT terraform:  + destination_type = "volume" 2025-11-08 12:59:33.906503 | orchestrator | 12:59:33.905 STDOUT terraform:  + multiattach = false 2025-11-08 12:59:33.906512 | orchestrator | 12:59:33.905 STDOUT terraform:  + source_type = "volume" 2025-11-08 12:59:33.906517 | orchestrator | 12:59:33.905 STDOUT terraform:  + uuid = (known after apply) 2025-11-08 12:59:33.906521 | orchestrator | 12:59:33.905 STDOUT terraform:  } 2025-11-08 12:59:33.906525 | orchestrator | 12:59:33.905 STDOUT terraform:  + network { 2025-11-08 12:59:33.906529 | orchestrator | 12:59:33.905 STDOUT terraform:  + access_network = false 2025-11-08 12:59:33.906532 | orchestrator | 12:59:33.905 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-11-08 12:59:33.906536 | orchestrator | 12:59:33.905 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-11-08 12:59:33.906540 | orchestrator | 12:59:33.905 STDOUT terraform:  + mac = (known after apply) 2025-11-08 12:59:33.906543 | orchestrator | 12:59:33.905 STDOUT terraform:  + name = (known after apply) 2025-11-08 12:59:33.906547 | orchestrator | 12:59:33.905 STDOUT terraform:  + port = (known after apply) 2025-11-08 12:59:33.906551 | orchestrator | 12:59:33.905 STDOUT terraform:  + uuid = (known after apply) 2025-11-08 12:59:33.906554 | orchestrator | 12:59:33.905 STDOUT terraform:  } 2025-11-08 12:59:33.906558 | orchestrator | 12:59:33.905 STDOUT terraform:  } 2025-11-08 12:59:33.906562 | orchestrator | 12:59:33.905 STDOUT terraform:  # openstack_compute_instance_v2.node_server[4] will be created 2025-11-08 12:59:33.906565 | orchestrator | 12:59:33.905 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-11-08 12:59:33.906569 | orchestrator | 12:59:33.905 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-11-08 12:59:33.906573 | orchestrator | 12:59:33.905 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-11-08 12:59:33.906576 | orchestrator | 12:59:33.905 STDOUT terraform:  + all_metadata = (known after apply) 2025-11-08 12:59:33.906580 | orchestrator | 12:59:33.905 STDOUT terraform:  + all_tags = (known after apply) 2025-11-08 12:59:33.906584 | orchestrator | 12:59:33.905 STDOUT terraform:  + availability_zone = "nova" 2025-11-08 12:59:33.906588 | orchestrator | 12:59:33.905 STDOUT terraform:  + config_drive = true 2025-11-08 12:59:33.906591 | orchestrator | 12:59:33.905 STDOUT terraform:  + created = (known after apply) 2025-11-08 12:59:33.906595 | orchestrator | 12:59:33.905 STDOUT terraform:  + flavor_id = (known after apply) 2025-11-08 12:59:33.906599 | orchestrator | 12:59:33.905 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-11-08 12:59:33.906602 | orchestrator | 12:59:33.905 STDOUT terraform:  + force_delete = false 2025-11-08 12:59:33.906606 | orchestrator | 12:59:33.905 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-11-08 12:59:33.906610 | orchestrator | 12:59:33.905 STDOUT terraform:  + id = (known after apply) 2025-11-08 12:59:33.906613 | orchestrator | 12:59:33.905 STDOUT terraform:  + image_id = (known after apply) 2025-11-08 12:59:33.906617 | orchestrator | 12:59:33.905 STDOUT terraform:  + image_name = (known after apply) 2025-11-08 12:59:33.906621 | orchestrator | 12:59:33.905 STDOUT 
terraform:  + key_pair = "testbed" 2025-11-08 12:59:33.906628 | orchestrator | 12:59:33.905 STDOUT terraform:  + name = "testbed-node-4" 2025-11-08 12:59:33.906631 | orchestrator | 12:59:33.905 STDOUT terraform:  + power_state = "active" 2025-11-08 12:59:33.906635 | orchestrator | 12:59:33.905 STDOUT terraform:  + region = (known after apply) 2025-11-08 12:59:33.906639 | orchestrator | 12:59:33.906 STDOUT terraform:  + security_groups = (known after apply) 2025-11-08 12:59:33.908302 | orchestrator | 12:59:33.906 STDOUT terraform:  + stop_before_destroy = false 2025-11-08 12:59:33.908592 | orchestrator | 12:59:33.908 STDOUT terraform:  + updated = (known after apply) 2025-11-08 12:59:33.908869 | orchestrator | 12:59:33.908 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-11-08 12:59:33.908906 | orchestrator | 12:59:33.908 STDOUT terraform:  + block_device { 2025-11-08 12:59:33.908975 | orchestrator | 12:59:33.908 STDOUT terraform:  + boot_index = 0 2025-11-08 12:59:33.909033 | orchestrator | 12:59:33.908 STDOUT terraform:  + delete_on_termination = false 2025-11-08 12:59:33.909106 | orchestrator | 12:59:33.909 STDOUT terraform:  + destination_type = "volume" 2025-11-08 12:59:33.909209 | orchestrator | 12:59:33.909 STDOUT terraform:  + multiattach = false 2025-11-08 12:59:33.909251 | orchestrator | 12:59:33.909 STDOUT terraform:  + source_type = "volume" 2025-11-08 12:59:33.909355 | orchestrator | 12:59:33.909 STDOUT terraform:  + uuid = (known after apply) 2025-11-08 12:59:33.909365 | orchestrator | 12:59:33.909 STDOUT terraform:  } 2025-11-08 12:59:33.909389 | orchestrator | 12:59:33.909 STDOUT terraform:  + network { 2025-11-08 12:59:33.909466 | orchestrator | 12:59:33.909 STDOUT terraform:  + access_network = false 2025-11-08 12:59:33.909523 | orchestrator | 12:59:33.909 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-11-08 12:59:33.909574 | orchestrator | 12:59:33.909 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-11-08 12:59:33.909673 | orchestrator | 12:59:33.909 STDOUT terraform:  + mac = (known after apply) 2025-11-08 12:59:33.909868 | orchestrator | 12:59:33.909 STDOUT terraform:  + name = (known after apply) 2025-11-08 12:59:33.909897 | orchestrator | 12:59:33.909 STDOUT terraform:  + port = (known after apply) 2025-11-08 12:59:33.909970 | orchestrator | 12:59:33.909 STDOUT terraform:  + uuid = (known after apply) 2025-11-08 12:59:33.909996 | orchestrator | 12:59:33.909 STDOUT terraform:  } 2025-11-08 12:59:33.910043 | orchestrator | 12:59:33.910 STDOUT terraform:  } 2025-11-08 12:59:33.910372 | orchestrator | 12:59:33.910 STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created 2025-11-08 12:59:33.910516 | orchestrator | 12:59:33.910 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-11-08 12:59:33.910751 | orchestrator | 12:59:33.910 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-11-08 12:59:33.910850 | orchestrator | 12:59:33.910 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-11-08 12:59:33.911167 | orchestrator | 12:59:33.910 STDOUT terraform:  + all_metadata = (known after apply) 2025-11-08 12:59:33.911278 | orchestrator | 12:59:33.911 STDOUT terraform:  + all_tags = (known after apply) 2025-11-08 12:59:33.911358 | orchestrator | 12:59:33.911 STDOUT terraform:  + availability_zone = "nova" 2025-11-08 12:59:33.911416 | orchestrator | 12:59:33.911 STDOUT terraform:  + config_drive = true 2025-11-08 12:59:33.911496 | orchestrator | 
12:59:33.911 STDOUT terraform:  + created = (known after apply) 2025-11-08 12:59:33.911850 | orchestrator | 12:59:33.911 STDOUT terraform:  + flavor_id = (known after apply) 2025-11-08 12:59:33.912025 | orchestrator | 12:59:33.911 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-11-08 12:59:33.912120 | orchestrator | 12:59:33.912 STDOUT terraform:  + force_delete = false 2025-11-08 12:59:33.912222 | orchestrator | 12:59:33.912 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-11-08 12:59:33.912313 | orchestrator | 12:59:33.912 STDOUT terraform:  + id = (known after apply) 2025-11-08 12:59:33.912403 | orchestrator | 12:59:33.912 STDOUT terraform:  + image_id = (known after apply) 2025-11-08 12:59:33.912552 | orchestrator | 12:59:33.912 STDOUT terraform:  + image_name = (known after apply) 2025-11-08 12:59:33.912635 | orchestrator | 12:59:33.912 STDOUT terraform:  + key_pair = "testbed" 2025-11-08 12:59:33.912759 | orchestrator | 12:59:33.912 STDOUT terraform:  + name = "testbed-node-5" 2025-11-08 12:59:33.912845 | orchestrator | 12:59:33.912 STDOUT terraform:  + power_state = "active" 2025-11-08 12:59:33.912926 | orchestrator | 12:59:33.912 STDOUT terraform:  + region = (known after apply) 2025-11-08 12:59:33.913066 | orchestrator | 12:59:33.912 STDOUT terraform:  + security_groups = (known after apply) 2025-11-08 12:59:33.913155 | orchestrator | 12:59:33.913 STDOUT terraform:  + stop_before_destroy = false 2025-11-08 12:59:33.913359 | orchestrator | 12:59:33.913 STDOUT terraform:  + updated = (known after apply) 2025-11-08 12:59:33.913536 | orchestrator | 12:59:33.913 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-11-08 12:59:33.913614 | orchestrator | 12:59:33.913 STDOUT terraform:  + block_device { 2025-11-08 12:59:33.913682 | orchestrator | 12:59:33.913 STDOUT terraform:  + boot_index = 0 2025-11-08 12:59:33.913724 | orchestrator | 12:59:33.913 STDOUT terraform:  + delete_on_termination = false 2025-11-08 12:59:33.913779 | orchestrator | 12:59:33.913 STDOUT terraform:  + destination_type = "volume" 2025-11-08 12:59:33.918410 | orchestrator | 12:59:33.913 STDOUT terraform:  + multiattach = false 2025-11-08 12:59:33.918528 | orchestrator | 12:59:33.918 STDOUT terraform:  + source_type = "volume" 2025-11-08 12:59:33.918774 | orchestrator | 12:59:33.918 STDOUT terraform:  + uuid = (known after apply) 2025-11-08 12:59:33.918963 | orchestrator | 12:59:33.918 STDOUT terraform:  } 2025-11-08 12:59:33.918999 | orchestrator | 12:59:33.918 STDOUT terraform:  + network { 2025-11-08 12:59:33.919277 | orchestrator | 12:59:33.918 STDOUT terraform:  + access_network = false 2025-11-08 12:59:33.919481 | orchestrator | 12:59:33.919 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-11-08 12:59:33.919741 | orchestrator | 12:59:33.919 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-11-08 12:59:33.919909 | orchestrator | 12:59:33.919 STDOUT terraform:  + mac = (known after apply) 2025-11-08 12:59:33.920174 | orchestrator | 12:59:33.920 STDOUT terraform:  + name = (known after apply) 2025-11-08 12:59:33.920489 | orchestrator | 12:59:33.920 STDOUT terraform:  + port = (known after apply) 2025-11-08 12:59:33.920846 | orchestrator | 12:59:33.920 STDOUT terraform:  + uuid = (known after apply) 2025-11-08 12:59:33.921002 | orchestrator | 12:59:33.920 STDOUT terraform:  } 2025-11-08 12:59:33.921127 | orchestrator | 12:59:33.920 STDOUT terraform:  } 2025-11-08 12:59:33.921429 | orchestrator | 12:59:33.921 STDOUT terraform:  # 
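The six node instances above differ only in their name suffix, which points to a single counted resource. Below is a minimal HCL sketch of a definition that would produce such a plan; everything not literally present in the plan output (the volume and port references, the user_data file name) is an illustrative assumption, not the actual testbed code.

    # Illustrative sketch only -- attribute values mirror the plan above,
    # resource references and file names are assumptions.
    resource "openstack_compute_instance_v2" "node_server" {
      count             = 6                                   # testbed-node-0 .. testbed-node-5
      name              = "testbed-node-${count.index}"
      availability_zone = "nova"
      flavor_name       = "OSISM-8V-32"
      key_pair          = openstack_compute_keypair_v2.key.name   # "testbed"
      config_drive      = true
      power_state       = "active"
      user_data         = file("user_data.yml")                # shown only as a hash in the plan

      block_device {
        boot_index            = 0
        source_type           = "volume"
        destination_type      = "volume"
        delete_on_termination = false
        uuid                  = openstack_blockstorage_volume_v3.node_volume[count.index].id  # assumed boot volume
      }

      network {
        port = openstack_networking_port_v2.node_port_management[count.index].id
      }
    }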
  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created (same attributes as node_volume_attachment[0])
  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created (same attributes as node_volume_attachment[0])
  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created (same attributes as node_volume_attachment[0])
  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created (same attributes as node_volume_attachment[0])
  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created (same attributes as node_volume_attachment[0])
  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created (same attributes as node_volume_attachment[0])
  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created (same attributes as node_volume_attachment[0])
  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created (same attributes as node_volume_attachment[0])
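The key pair is created without a supplied public key (private_key is reported as sensitive and public_key as known after apply), and each of the nine attachments simply joins a volume to an instance. A sketch under the assumption that the data volumes are declared elsewhere as openstack_blockstorage_volume_v3.node_volume; the volume-to-node mapping shown here is a guess, not taken from the plan.

    # Sketch only -- the volume resource and the volume-to-node mapping are assumptions.
    resource "openstack_compute_keypair_v2" "key" {
      name = "testbed"          # no public_key given, so the provider generates the pair
    }

    resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      count       = 9
      instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id
      volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
    }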
  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }
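The manager's floating IP comes from the "public" pool and is bound to a port rather than directly to the instance. A minimal sketch, assuming the manager port defined further down is the attachment point:

    # Sketch only -- the port reference is an assumption.
    resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      pool = "public"
    }

    resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
      port_id     = openstack_networking_port_v2.manager_port_management.id
    }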
  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)
      + segments (known after apply)
    }
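The management network only fixes its name and availability zone hints; everything else is provider-computed. A corresponding definition could look like the sketch below (a matching subnet resource, not part of this excerpt, would normally accompany it):

    # Sketch only.
    resource "openstack_networking_network_v2" "net_management" {
      name                    = "net-testbed-management"
      availability_zone_hints = ["nova"]
    }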
  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created (same attributes as node_port_management[0], fixed_ip 192.168.16.11)
  # openstack_networking_port_v2.node_port_management[2] will be created (same attributes as node_port_management[0], fixed_ip 192.168.16.12)
  # openstack_networking_port_v2.node_port_management[3] will be created (same attributes as node_port_management[0], fixed_ip 192.168.16.13)
  # openstack_networking_port_v2.node_port_management[4] will be created (same attributes as node_port_management[0], fixed_ip 192.168.16.14)
  # openstack_networking_port_v2.node_port_management[5] will be created (same attributes as node_port_management[0], fixed_ip 192.168.16.15)
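Each node port pins a fixed address (192.168.16.10 through .15) and allows the shared virtual addresses 192.168.16.254, .8 and .9 as additional source addresses. A sketch of one way to express this; the subnet reference and the address arithmetic are assumptions, the literal addresses come from the plan above.

    # Sketch only -- subnet reference and IP arithmetic are assumptions.
    resource "openstack_networking_port_v2" "node_port_management" {
      count      = 6
      network_id = openstack_networking_network_v2.net_management.id

      fixed_ip {
        subnet_id  = openstack_networking_subnet_v2.subnet_management.id
        ip_address = "192.168.16.${10 + count.index}"
      }

      allowed_address_pairs {
        ip_address = "192.168.16.254/32"
      }
      allowed_address_pairs {
        ip_address = "192.168.16.8/32"
      }
      allowed_address_pairs {
        ip_address = "192.168.16.9/32"
      }
    }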
  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)
      + external_fixed_ip (known after apply)
    }
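The router attaches the management subnet to the pre-existing external network e6be7364-bfd8-4de7-8120-8f41c69a139a. A minimal sketch, again assuming the management subnet is defined elsewhere:

    # Sketch only -- the subnet reference is an assumption.
    resource "openstack_networking_router_v2" "router" {
      name                    = "testbed"
      external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      availability_zone_hints = ["nova"]
    }

    resource "openstack_networking_router_interface_v2" "router_interface" {
      router_id = openstack_networking_router_v2.router.id
      subnet_id = openstack_networking_subnet_v2.subnet_management.id
    }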
12:59:33.954 STDOUT terraform:  + security_group_id = (known after apply) 2025-11-08 12:59:33.971111 | orchestrator | 12:59:33.955 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-08 12:59:33.971115 | orchestrator | 12:59:33.955 STDOUT terraform:  } 2025-11-08 12:59:33.971118 | orchestrator | 12:59:33.955 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-11-08 12:59:33.971122 | orchestrator | 12:59:33.955 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-11-08 12:59:33.971126 | orchestrator | 12:59:33.955 STDOUT terraform:  + description = "wireguard" 2025-11-08 12:59:33.971133 | orchestrator | 12:59:33.955 STDOUT terraform:  + direction = "ingress" 2025-11-08 12:59:33.971137 | orchestrator | 12:59:33.955 STDOUT terraform:  + ethertype = "IPv4" 2025-11-08 12:59:33.971140 | orchestrator | 12:59:33.955 STDOUT terraform:  + id = (known after apply) 2025-11-08 12:59:33.971144 | orchestrator | 12:59:33.955 STDOUT terraform:  + port_range_max = 51820 2025-11-08 12:59:33.971148 | orchestrator | 12:59:33.955 STDOUT terraform:  + port_range_min = 51820 2025-11-08 12:59:33.971151 | orchestrator | 12:59:33.955 STDOUT terraform:  + protocol = "udp" 2025-11-08 12:59:33.971155 | orchestrator | 12:59:33.955 STDOUT terraform:  + region = (known after apply) 2025-11-08 12:59:33.971159 | orchestrator | 12:59:33.955 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-11-08 12:59:33.971163 | orchestrator | 12:59:33.955 STDOUT terraform:  + remote_group_id = (known after apply) 2025-11-08 12:59:33.971166 | orchestrator | 12:59:33.955 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-11-08 12:59:33.971170 | orchestrator | 12:59:33.955 STDOUT terraform:  + security_group_id = (known after apply) 2025-11-08 12:59:33.971174 | orchestrator | 12:59:33.955 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-08 12:59:33.971178 | orchestrator | 12:59:33.955 STDOUT terraform:  } 2025-11-08 12:59:33.971181 | orchestrator | 12:59:33.955 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-11-08 12:59:33.971185 | orchestrator | 12:59:33.955 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-11-08 12:59:33.971192 | orchestrator | 12:59:33.955 STDOUT terraform:  + direction = "ingress" 2025-11-08 12:59:33.971196 | orchestrator | 12:59:33.955 STDOUT terraform:  + ethertype = "IPv4" 2025-11-08 12:59:33.971199 | orchestrator | 12:59:33.955 STDOUT terraform:  + id = (known after apply) 2025-11-08 12:59:33.971209 | orchestrator | 12:59:33.955 STDOUT terraform:  + protocol = "tcp" 2025-11-08 12:59:33.971213 | orchestrator | 12:59:33.955 STDOUT terraform:  + region = (known after apply) 2025-11-08 12:59:33.971217 | orchestrator | 12:59:33.955 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-11-08 12:59:33.971220 | orchestrator | 12:59:33.955 STDOUT terraform:  + remote_group_id = (known after apply) 2025-11-08 12:59:33.971224 | orchestrator | 12:59:33.955 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-11-08 12:59:33.971228 | orchestrator | 12:59:33.955 STDOUT terraform:  + security_group_id = (known after apply) 2025-11-08 12:59:33.971232 | orchestrator | 12:59:33.956 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-08 12:59:33.971235 | orchestrator | 12:59:33.956 STDOUT terraform:  } 2025-11-08 
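All of the management security-group rules follow the same shape; as a sketch, the two rules just planned (SSH and WireGuard) correspond to definitions like the following, with the security_group_id reference assumed:

resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
  description       = "ssh"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}

resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
  description       = "wireguard"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "udp"
  port_range_min    = 51820
  port_range_max    = 51820
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}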
12:59:33.971239 | orchestrator | 12:59:33.956 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-11-08 12:59:33.971243 | orchestrator | 12:59:33.956 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-11-08 12:59:33.971246 | orchestrator | 12:59:33.956 STDOUT terraform:  + direction = "ingress" 2025-11-08 12:59:33.971250 | orchestrator | 12:59:33.956 STDOUT terraform:  + ethertype = "IPv4" 2025-11-08 12:59:33.971254 | orchestrator | 12:59:33.956 STDOUT terraform:  + id = (known after apply) 2025-11-08 12:59:33.971258 | orchestrator | 12:59:33.956 STDOUT terraform:  + protocol = "udp" 2025-11-08 12:59:33.971261 | orchestrator | 12:59:33.956 STDOUT terraform:  + region = (known after apply) 2025-11-08 12:59:33.971265 | orchestrator | 12:59:33.956 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-11-08 12:59:33.971269 | orchestrator | 12:59:33.956 STDOUT terraform:  + remote_group_id = (known after apply) 2025-11-08 12:59:33.971272 | orchestrator | 12:59:33.956 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-11-08 12:59:33.971279 | orchestrator | 12:59:33.956 STDOUT terraform:  + security_group_id = (known after apply) 2025-11-08 12:59:33.971286 | orchestrator | 12:59:33.956 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-08 12:59:33.971290 | orchestrator | 12:59:33.956 STDOUT terraform:  } 2025-11-08 12:59:33.971293 | orchestrator | 12:59:33.956 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-11-08 12:59:33.971297 | orchestrator | 12:59:33.956 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-11-08 12:59:33.971301 | orchestrator | 12:59:33.956 STDOUT terraform:  + direction = "ingress" 2025-11-08 12:59:33.971304 | orchestrator | 12:59:33.956 STDOUT terraform:  + ethertype = "IPv4" 2025-11-08 12:59:33.971308 | orchestrator | 12:59:33.956 STDOUT terraform:  + id = (known after apply) 2025-11-08 12:59:33.971315 | orchestrator | 12:59:33.956 STDOUT terraform:  + protocol = "icmp" 2025-11-08 12:59:33.971319 | orchestrator | 12:59:33.956 STDOUT terraform:  + region = (known after apply) 2025-11-08 12:59:33.971323 | orchestrator | 12:59:33.956 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-11-08 12:59:33.971326 | orchestrator | 12:59:33.956 STDOUT terraform:  + remote_group_id = (known after apply) 2025-11-08 12:59:33.971330 | orchestrator | 12:59:33.956 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-11-08 12:59:33.971334 | orchestrator | 12:59:33.956 STDOUT terraform:  + security_group_id = (known after apply) 2025-11-08 12:59:33.971337 | orchestrator | 12:59:33.956 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-08 12:59:33.971341 | orchestrator | 12:59:33.957 STDOUT terraform:  } 2025-11-08 12:59:33.971345 | orchestrator | 12:59:33.957 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-11-08 12:59:33.971349 | orchestrator | 12:59:33.957 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-11-08 12:59:33.971352 | orchestrator | 12:59:33.957 STDOUT terraform:  + direction = "ingress" 2025-11-08 12:59:33.971356 | orchestrator | 12:59:33.957 STDOUT terraform:  + ethertype = "IPv4" 2025-11-08 12:59:33.971360 | orchestrator | 12:59:33.957 STDOUT terraform:  + 
id = (known after apply) 2025-11-08 12:59:33.971363 | orchestrator | 12:59:33.957 STDOUT terraform:  + protocol = "tcp" 2025-11-08 12:59:33.971367 | orchestrator | 12:59:33.957 STDOUT terraform:  + region = (known after apply) 2025-11-08 12:59:33.971371 | orchestrator | 12:59:33.957 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-11-08 12:59:33.971374 | orchestrator | 12:59:33.957 STDOUT terraform:  + remote_group_id = (known after apply) 2025-11-08 12:59:33.971378 | orchestrator | 12:59:33.957 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-11-08 12:59:33.971382 | orchestrator | 12:59:33.957 STDOUT terraform:  + security_group_id = (known after apply) 2025-11-08 12:59:33.971385 | orchestrator | 12:59:33.957 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-08 12:59:33.971394 | orchestrator | 12:59:33.957 STDOUT terraform:  } 2025-11-08 12:59:33.971398 | orchestrator | 12:59:33.957 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-11-08 12:59:33.971402 | orchestrator | 12:59:33.957 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-11-08 12:59:33.971406 | orchestrator | 12:59:33.957 STDOUT terraform:  + direction = "ingress" 2025-11-08 12:59:33.971409 | orchestrator | 12:59:33.957 STDOUT terraform:  + ethertype = "IPv4" 2025-11-08 12:59:33.971413 | orchestrator | 12:59:33.957 STDOUT terraform:  + id = (known after apply) 2025-11-08 12:59:33.971417 | orchestrator | 12:59:33.957 STDOUT terraform:  + protocol = "udp" 2025-11-08 12:59:33.971423 | orchestrator | 12:59:33.957 STDOUT terraform:  + region = (known after apply) 2025-11-08 12:59:33.971432 | orchestrator | 12:59:33.957 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-11-08 12:59:33.971436 | orchestrator | 12:59:33.957 STDOUT terraform:  + remote_group_id = (known after apply) 2025-11-08 12:59:33.971439 | orchestrator | 12:59:33.957 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-11-08 12:59:33.971443 | orchestrator | 12:59:33.957 STDOUT terraform:  + security_group_id = (known after apply) 2025-11-08 12:59:33.971447 | orchestrator | 12:59:33.957 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-08 12:59:33.971451 | orchestrator | 12:59:33.957 STDOUT terraform:  } 2025-11-08 12:59:33.971454 | orchestrator | 12:59:33.957 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-11-08 12:59:33.971458 | orchestrator | 12:59:33.957 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-11-08 12:59:33.971462 | orchestrator | 12:59:33.957 STDOUT terraform:  + direction = "ingress" 2025-11-08 12:59:33.971466 | orchestrator | 12:59:33.957 STDOUT terraform:  + ethertype = "IPv4" 2025-11-08 12:59:33.971469 | orchestrator | 12:59:33.957 STDOUT terraform:  + id = (known after apply) 2025-11-08 12:59:33.971473 | orchestrator | 12:59:33.962 STDOUT terraform:  + protocol = "icmp" 2025-11-08 12:59:33.971477 | orchestrator | 12:59:33.962 STDOUT terraform:  + region = (known after apply) 2025-11-08 12:59:33.971480 | orchestrator | 12:59:33.962 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-11-08 12:59:33.971484 | orchestrator | 12:59:33.962 STDOUT terraform:  + remote_group_id = (known after apply) 2025-11-08 12:59:33.971488 | orchestrator | 12:59:33.962 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-11-08 12:59:33.971492 | 
orchestrator | 12:59:33.962 STDOUT terraform:  + security_group_id = (known after apply) 2025-11-08 12:59:33.971495 | orchestrator | 12:59:33.962 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-08 12:59:33.971499 | orchestrator | 12:59:33.962 STDOUT terraform:  } 2025-11-08 12:59:33.971503 | orchestrator | 12:59:33.962 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-11-08 12:59:33.971506 | orchestrator | 12:59:33.962 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-11-08 12:59:33.971510 | orchestrator | 12:59:33.962 STDOUT terraform:  + description = "vrrp" 2025-11-08 12:59:33.971514 | orchestrator | 12:59:33.962 STDOUT terraform:  + direction = "ingress" 2025-11-08 12:59:33.971518 | orchestrator | 12:59:33.962 STDOUT terraform:  + ethertype = "IPv4" 2025-11-08 12:59:33.971521 | orchestrator | 12:59:33.962 STDOUT terraform:  + id = (known after apply) 2025-11-08 12:59:33.971525 | orchestrator | 12:59:33.962 STDOUT terraform:  + protocol = "112" 2025-11-08 12:59:33.971529 | orchestrator | 12:59:33.962 STDOUT terraform:  + region = (known after apply) 2025-11-08 12:59:33.971533 | orchestrator | 12:59:33.962 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-11-08 12:59:33.971539 | orchestrator | 12:59:33.962 STDOUT terraform:  + remote_group_id = (known after apply) 2025-11-08 12:59:33.971543 | orchestrator | 12:59:33.962 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-11-08 12:59:33.971547 | orchestrator | 12:59:33.962 STDOUT terraform:  + security_group_id = (known after apply) 2025-11-08 12:59:33.971550 | orchestrator | 12:59:33.962 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-08 12:59:33.971554 | orchestrator | 12:59:33.962 STDOUT terraform:  } 2025-11-08 12:59:33.971558 | orchestrator | 12:59:33.962 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-11-08 12:59:33.971562 | orchestrator | 12:59:33.962 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-11-08 12:59:33.971568 | orchestrator | 12:59:33.962 STDOUT terraform:  + all_tags = (known after apply) 2025-11-08 12:59:33.971572 | orchestrator | 12:59:33.962 STDOUT terraform:  + description = "management security group" 2025-11-08 12:59:33.971576 | orchestrator | 12:59:33.962 STDOUT terraform:  + id = (known after apply) 2025-11-08 12:59:33.971579 | orchestrator | 12:59:33.962 STDOUT terraform:  + name = "testbed-management" 2025-11-08 12:59:33.971583 | orchestrator | 12:59:33.962 STDOUT terraform:  + region = (known after apply) 2025-11-08 12:59:33.971587 | orchestrator | 12:59:33.962 STDOUT terraform:  + stateful = (known after apply) 2025-11-08 12:59:33.971590 | orchestrator | 12:59:33.963 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-08 12:59:33.971594 | orchestrator | 12:59:33.963 STDOUT terraform:  } 2025-11-08 12:59:33.971598 | orchestrator | 12:59:33.963 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-11-08 12:59:33.971602 | orchestrator | 12:59:33.963 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-11-08 12:59:33.971605 | orchestrator | 12:59:33.963 STDOUT terraform:  + all_tags = (known after apply) 2025-11-08 12:59:33.971609 | orchestrator | 12:59:33.963 STDOUT terraform:  + description = "node security group" 2025-11-08 12:59:33.971613 | orchestrator | 
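The VRRP rule is the only one without a port range: protocol 112 (VRRP) is passed as a string. A sketch of that rule next to the management security group it is presumably attached to; the plan does not show which group the rule targets, so that reference is an assumption:

resource "openstack_networking_secgroup_v2" "security_group_management" {
  name        = "testbed-management"
  description = "management security group"
}

resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112"  # IP protocol number for VRRP; no port range applies
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}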
12:59:33.963 STDOUT terraform:  + id = (known after apply) 2025-11-08 12:59:33.971616 | orchestrator | 12:59:33.963 STDOUT terraform:  + name = "testbed-node" 2025-11-08 12:59:33.971620 | orchestrator | 12:59:33.963 STDOUT terraform:  + region = (known after apply) 2025-11-08 12:59:33.971624 | orchestrator | 12:59:33.963 STDOUT terraform:  + stateful = (known after apply) 2025-11-08 12:59:33.971627 | orchestrator | 12:59:33.963 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-08 12:59:33.971631 | orchestrator | 12:59:33.963 STDOUT terraform:  } 2025-11-08 12:59:33.971635 | orchestrator | 12:59:33.963 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-11-08 12:59:33.971639 | orchestrator | 12:59:33.963 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-11-08 12:59:33.971642 | orchestrator | 12:59:33.963 STDOUT terraform:  + all_tags = (known after apply) 2025-11-08 12:59:33.971646 | orchestrator | 12:59:33.963 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-11-08 12:59:33.971650 | orchestrator | 12:59:33.963 STDOUT terraform:  + dns_nameservers = [ 2025-11-08 12:59:33.971657 | orchestrator | 12:59:33.963 STDOUT terraform:  + "8.8.8.8", 2025-11-08 12:59:33.971661 | orchestrator | 12:59:33.963 STDOUT terraform:  + "9.9.9.9", 2025-11-08 12:59:33.971665 | orchestrator | 12:59:33.963 STDOUT terraform:  ] 2025-11-08 12:59:33.971668 | orchestrator | 12:59:33.963 STDOUT terraform:  + enable_dhcp = true 2025-11-08 12:59:33.971672 | orchestrator | 12:59:33.963 STDOUT terraform:  + gateway_ip = (known after apply) 2025-11-08 12:59:33.971676 | orchestrator | 12:59:33.970 STDOUT terraform:  + id = (known after apply) 2025-11-08 12:59:33.971679 | orchestrator | 12:59:33.970 STDOUT terraform:  + ip_version = 4 2025-11-08 12:59:33.971683 | orchestrator | 12:59:33.970 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-11-08 12:59:33.971687 | orchestrator | 12:59:33.970 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-11-08 12:59:33.971691 | orchestrator | 12:59:33.970 STDOUT terraform:  + name = "subnet-testbed-management" 2025-11-08 12:59:33.971694 | orchestrator | 12:59:33.970 STDOUT terraform:  + network_id = (known after apply) 2025-11-08 12:59:33.971698 | orchestrator | 12:59:33.970 STDOUT terraform:  + no_gateway = false 2025-11-08 12:59:33.971702 | orchestrator | 12:59:33.970 STDOUT terraform:  + region = (known after apply) 2025-11-08 12:59:33.971705 | orchestrator | 12:59:33.970 STDOUT terraform:  + service_types = (known after apply) 2025-11-08 12:59:33.971730 | orchestrator | 12:59:33.970 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-08 12:59:33.971737 | orchestrator | 12:59:33.970 STDOUT terraform:  + allocation_pool { 2025-11-08 12:59:33.971743 | orchestrator | 12:59:33.970 STDOUT terraform:  + end = "192.168.31.250" 2025-11-08 12:59:33.971747 | orchestrator | 12:59:33.970 STDOUT terraform:  + start = "192.168.31.200" 2025-11-08 12:59:33.971750 | orchestrator | 12:59:33.970 STDOUT terraform:  } 2025-11-08 12:59:33.971754 | orchestrator | 12:59:33.970 STDOUT terraform:  } 2025-11-08 12:59:33.971758 | orchestrator | 12:59:33.970 STDOUT terraform:  # terraform_data.image will be created 2025-11-08 12:59:33.971762 | orchestrator | 12:59:33.970 STDOUT terraform:  + resource "terraform_data" "image" { 2025-11-08 12:59:33.971765 | orchestrator | 12:59:33.970 STDOUT terraform:  + id = (known after apply) 2025-11-08 12:59:33.971769 | orchestrator | 12:59:33.970 STDOUT 
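The management subnet is fully specified in the plan; reconstructed as HCL, with only the network_id reference assumed:

resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  # DHCP hands out addresses from the top of the /20 only; the lower part
  # of the range stays available for the statically assigned ports above
  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}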
terraform:  + input = "Ubuntu 24.04" 2025-11-08 12:59:33.971773 | orchestrator | 12:59:33.970 STDOUT terraform:  + output = (known after apply) 2025-11-08 12:59:33.971777 | orchestrator | 12:59:33.970 STDOUT terraform:  } 2025-11-08 12:59:33.971780 | orchestrator | 12:59:33.970 STDOUT terraform:  # terraform_data.image_node will be created 2025-11-08 12:59:33.971784 | orchestrator | 12:59:33.970 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-11-08 12:59:33.971788 | orchestrator | 12:59:33.970 STDOUT terraform:  + id = (known after apply) 2025-11-08 12:59:33.971791 | orchestrator | 12:59:33.970 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-11-08 12:59:33.971795 | orchestrator | 12:59:33.970 STDOUT terraform:  + output = (known after apply) 2025-11-08 12:59:33.971799 | orchestrator | 12:59:33.970 STDOUT terraform:  } 2025-11-08 12:59:33.971805 | orchestrator | 12:59:33.970 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-11-08 12:59:33.971809 | orchestrator | 12:59:33.970 STDOUT terraform: Changes to Outputs: 2025-11-08 12:59:33.971813 | orchestrator | 12:59:33.970 STDOUT terraform:  + manager_address = (sensitive value) 2025-11-08 12:59:33.971817 | orchestrator | 12:59:33.970 STDOUT terraform:  + private_key = (sensitive value) 2025-11-08 12:59:34.044989 | orchestrator | 12:59:34.044 STDOUT terraform: terraform_data.image_node: Creating... 2025-11-08 12:59:34.196526 | orchestrator | 12:59:34.196 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=17f76c32-b503-8ac5-03d1-76571b3f6a7a] 2025-11-08 12:59:34.197916 | orchestrator | 12:59:34.197 STDOUT terraform: terraform_data.image: Creating... 2025-11-08 12:59:34.201567 | orchestrator | 12:59:34.201 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=6a0236ca-bf8d-79b5-a76a-07b7e2591a1f] 2025-11-08 12:59:34.225256 | orchestrator | 12:59:34.225 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-11-08 12:59:34.228302 | orchestrator | 12:59:34.228 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-11-08 12:59:34.244594 | orchestrator | 12:59:34.242 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-11-08 12:59:34.247667 | orchestrator | 12:59:34.244 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-11-08 12:59:34.247709 | orchestrator | 12:59:34.246 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-11-08 12:59:34.247714 | orchestrator | 12:59:34.246 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-11-08 12:59:34.247718 | orchestrator | 12:59:34.246 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-11-08 12:59:34.247723 | orchestrator | 12:59:34.246 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-11-08 12:59:34.249124 | orchestrator | 12:59:34.248 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-11-08 12:59:34.249662 | orchestrator | 12:59:34.249 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-11-08 12:59:34.696087 | orchestrator | 12:59:34.693 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2025-11-08 12:59:34.701479 | orchestrator | 12:59:34.701 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 
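The two terraform_data resources simply carry the image name ("Ubuntu 24.04"); the data sources read at the very start of the apply resolve that name to a Glance image ID. A sketch of that wiring, assuming the data source consumes the terraform_data output; the variable name is hypothetical:

variable "image" {
  default = "Ubuntu 24.04"  # value taken from the plan output
}

# terraform_data echoes its input as output after apply; referencing the
# output lets dependent resources react when the image name changes
resource "terraform_data" "image" {
  input = var.image
}

data "openstack_images_image_v2" "image" {
  name        = terraform_data.image.output
  most_recent = true
}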
2025-11-08 12:59:34.753783 | orchestrator | 12:59:34.753 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2025-11-08 12:59:34.758289 | orchestrator | 12:59:34.756 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-11-08 12:59:35.262201 | orchestrator | 12:59:35.258 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=457c6224-b04e-4e3d-b442-baad7371f8ee] 2025-11-08 12:59:35.555435 | orchestrator | 12:59:35.264 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-11-08 12:59:35.555493 | orchestrator | 12:59:35.326 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2025-11-08 12:59:35.555505 | orchestrator | 12:59:35.336 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-11-08 12:59:37.884155 | orchestrator | 12:59:37.883 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=f84a4500-4dd6-44ad-a9ff-274f9f36fc36] 2025-11-08 12:59:37.901874 | orchestrator | 12:59:37.901 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-11-08 12:59:37.910524 | orchestrator | 12:59:37.910 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=4c7606f474ac31466c54654c9e2c1891b36e7fcc] 2025-11-08 12:59:37.920783 | orchestrator | 12:59:37.920 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-11-08 12:59:37.922835 | orchestrator | 12:59:37.922 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=dc29408d-4f3e-478d-82da-c226aaca029c] 2025-11-08 12:59:37.925832 | orchestrator | 12:59:37.925 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=753444d52d20071026189446128aa2e4a47884ce] 2025-11-08 12:59:37.928407 | orchestrator | 12:59:37.928 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-11-08 12:59:37.929466 | orchestrator | 12:59:37.929 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-11-08 12:59:37.946792 | orchestrator | 12:59:37.946 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=a45a4cf7-d855-4857-b9ae-b573b3c7176d] 2025-11-08 12:59:37.947506 | orchestrator | 12:59:37.947 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=92c2e246-dc93-49f1-98da-a6574bccf4cb] 2025-11-08 12:59:37.952285 | orchestrator | 12:59:37.952 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-11-08 12:59:37.953072 | orchestrator | 12:59:37.952 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-11-08 12:59:37.953478 | orchestrator | 12:59:37.953 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=c4ff64d0-4838-4e36-9da9-d01e7c6d3995] 2025-11-08 12:59:37.963009 | orchestrator | 12:59:37.962 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 
2025-11-08 12:59:37.968993 | orchestrator | 12:59:37.968 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=3757d830-b0af-49e2-85a4-9877085f3a2f] 2025-11-08 12:59:37.973843 | orchestrator | 12:59:37.973 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=e000e6ad-d7f7-4db6-bbc8-734d25f4dc3b] 2025-11-08 12:59:37.974424 | orchestrator | 12:59:37.974 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-11-08 12:59:37.981558 | orchestrator | 12:59:37.981 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-11-08 12:59:38.034450 | orchestrator | 12:59:38.034 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=4485c49e-1f3e-4177-b8cf-e377966726ff] 2025-11-08 12:59:38.034540 | orchestrator | 12:59:38.034 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=ce3e3473-55e8-454e-8a0a-ac291b184d20] 2025-11-08 12:59:38.736041 | orchestrator | 12:59:38.735 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=61aa7527-5f11-4945-bb69-ee2ef88e1c47] 2025-11-08 12:59:38.816914 | orchestrator | 12:59:38.816 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=e976e1d0-a462-44f4-a89a-07b866acc7b1] 2025-11-08 12:59:38.823997 | orchestrator | 12:59:38.823 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-11-08 12:59:41.327727 | orchestrator | 12:59:41.327 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=cd27e0c9-617b-4a12-acb0-00efb73b425b] 2025-11-08 12:59:41.410133 | orchestrator | 12:59:41.406 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165] 2025-11-08 12:59:41.424621 | orchestrator | 12:59:41.424 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=d1fc42fd-5332-49a1-9701-fd67e0fd5d8d] 2025-11-08 12:59:41.452605 | orchestrator | 12:59:41.452 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=7bcb89ad-f0c3-4ca7-8180-786cf7e929b8] 2025-11-08 12:59:41.468012 | orchestrator | 12:59:41.467 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=9492338a-04c6-4dbd-b6d4-47c0f3d58df2] 2025-11-08 12:59:41.471476 | orchestrator | 12:59:41.471 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=e00aeba1-5189-4db1-bd39-a4f48e1f1ff4] 2025-11-08 12:59:41.929485 | orchestrator | 12:59:41.929 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 3s [id=7de6183c-577a-4c00-a0d8-3d0fd1e33005] 2025-11-08 12:59:41.942201 | orchestrator | 12:59:41.941 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-11-08 12:59:41.942919 | orchestrator | 12:59:41.942 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-11-08 12:59:41.943823 | orchestrator | 12:59:41.943 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 
2025-11-08 12:59:42.142382 | orchestrator | 12:59:42.142 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=4916b338-6833-4644-aeb1-90d620e95edb] 2025-11-08 12:59:42.150620 | orchestrator | 12:59:42.150 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-11-08 12:59:42.152336 | orchestrator | 12:59:42.152 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-11-08 12:59:42.152843 | orchestrator | 12:59:42.152 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-11-08 12:59:42.153223 | orchestrator | 12:59:42.153 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-11-08 12:59:42.153537 | orchestrator | 12:59:42.153 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-11-08 12:59:42.156772 | orchestrator | 12:59:42.156 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-11-08 12:59:42.178939 | orchestrator | 12:59:42.178 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=1a2fc66a-f7c1-40c4-8904-59b2fe155028] 2025-11-08 12:59:42.192686 | orchestrator | 12:59:42.191 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-11-08 12:59:42.196025 | orchestrator | 12:59:42.195 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-11-08 12:59:42.198754 | orchestrator | 12:59:42.198 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-11-08 12:59:42.355563 | orchestrator | 12:59:42.355 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=3729b5a4-b970-4708-a6b4-c1fce76abbe7] 2025-11-08 12:59:42.362108 | orchestrator | 12:59:42.361 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-11-08 12:59:42.368094 | orchestrator | 12:59:42.365 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=a634ed99-b5ce-4f7d-b009-9a2390a4b605] 2025-11-08 12:59:42.374104 | orchestrator | 12:59:42.373 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-11-08 12:59:42.511616 | orchestrator | 12:59:42.511 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=27cbffc3-0617-4836-b09b-fcf4cdd4bb53] 2025-11-08 12:59:42.521515 | orchestrator | 12:59:42.521 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-11-08 12:59:42.546398 | orchestrator | 12:59:42.546 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=1482aeea-c9c7-48b4-903e-4d051bb28562] 2025-11-08 12:59:42.561668 | orchestrator | 12:59:42.561 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-11-08 12:59:42.754843 | orchestrator | 12:59:42.754 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=1e654b62-805b-41e5-a1a3-07cecdea40b9] 2025-11-08 12:59:42.765875 | orchestrator | 12:59:42.765 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 
2025-11-08 12:59:42.836807 | orchestrator | 12:59:42.836 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=5316225b-76ec-414d-899f-fa79ff9cf5d9] 2025-11-08 12:59:42.842132 | orchestrator | 12:59:42.841 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-11-08 12:59:42.970810 | orchestrator | 12:59:42.970 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=3d10797f-2aa3-4f5b-88f9-d1c33418a853] 2025-11-08 12:59:42.975642 | orchestrator | 12:59:42.975 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-11-08 12:59:43.031001 | orchestrator | 12:59:43.030 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=a2bfe24a-3eb8-41e4-bc30-30d86fc3ec26] 2025-11-08 12:59:43.085218 | orchestrator | 12:59:43.084 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=2021adbc-2710-4624-a423-36b5a30380f6] 2025-11-08 12:59:43.143202 | orchestrator | 12:59:43.142 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=8de9695d-870b-488b-b385-e3f4290f331c] 2025-11-08 12:59:43.280729 | orchestrator | 12:59:43.280 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 0s [id=164d56c5-9f32-4e36-b7e2-ef2c4962aa52] 2025-11-08 12:59:43.355386 | orchestrator | 12:59:43.355 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=c70bd975-3465-47c0-8e74-d888527cdf7d] 2025-11-08 12:59:43.373604 | orchestrator | 12:59:43.373 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 0s [id=01c3e288-2a01-4f16-a2d7-a99438c35796] 2025-11-08 12:59:43.389219 | orchestrator | 12:59:43.388 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=32c60202-36f3-4586-ac0f-6639e52e5c86] 2025-11-08 12:59:43.526610 | orchestrator | 12:59:43.526 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=158662cd-e2b4-4b4c-8caf-944d3329fa35] 2025-11-08 12:59:43.927795 | orchestrator | 12:59:43.927 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=996f100f-3857-4cfe-8204-922aeaf8a95b] 2025-11-08 12:59:44.543820 | orchestrator | 12:59:44.543 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=d3ffdc73-8d13-46eb-8a20-d3ab7089e6b0] 2025-11-08 12:59:44.573625 | orchestrator | 12:59:44.573 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-11-08 12:59:44.586482 | orchestrator | 12:59:44.586 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-11-08 12:59:44.587068 | orchestrator | 12:59:44.586 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-11-08 12:59:44.591269 | orchestrator | 12:59:44.591 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-11-08 12:59:44.601003 | orchestrator | 12:59:44.600 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-11-08 12:59:44.601043 | orchestrator | 12:59:44.600 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 
2025-11-08 12:59:44.601095 | orchestrator | 12:59:44.601 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-11-08 12:59:46.332428 | orchestrator | 12:59:46.332 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 1s [id=3d433f18-437d-4afd-bbdf-b4ebf967305c] 2025-11-08 12:59:46.349661 | orchestrator | 12:59:46.348 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-11-08 12:59:46.349733 | orchestrator | 12:59:46.348 STDOUT terraform: local_file.inventory: Creating... 2025-11-08 12:59:46.352497 | orchestrator | 12:59:46.351 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-11-08 12:59:46.356983 | orchestrator | 12:59:46.356 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=49a8901c46ac7ee43639b78c5b77cb71c85619bf] 2025-11-08 12:59:46.357260 | orchestrator | 12:59:46.357 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=8b2dca0bb2f0ab5f325409bfa9699f7e9f7cfbdd] 2025-11-08 12:59:47.599518 | orchestrator | 12:59:47.599 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 2s [id=3d433f18-437d-4afd-bbdf-b4ebf967305c] 2025-11-08 12:59:54.591683 | orchestrator | 12:59:54.591 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-11-08 12:59:54.591831 | orchestrator | 12:59:54.591 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-11-08 12:59:54.596768 | orchestrator | 12:59:54.596 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-11-08 12:59:54.602246 | orchestrator | 12:59:54.601 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-11-08 12:59:54.602323 | orchestrator | 12:59:54.602 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-11-08 12:59:54.602469 | orchestrator | 12:59:54.602 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-11-08 13:00:04.591742 | orchestrator | 13:00:04.591 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-11-08 13:00:04.591848 | orchestrator | 13:00:04.591 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-11-08 13:00:04.597807 | orchestrator | 13:00:04.597 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-11-08 13:00:04.603172 | orchestrator | 13:00:04.602 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-11-08 13:00:04.603249 | orchestrator | 13:00:04.603 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-11-08 13:00:04.603265 | orchestrator | 13:00:04.603 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... 
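The manager is reached through the floating IP created here, which is then bound to its management port. A sketch of the two resources, with the pool name left as a placeholder since the external pool name is not visible in this log:

resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = var.public_network  # hypothetical variable for the external network name
}

resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}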
[20s elapsed] 2025-11-08 13:00:04.971835 | orchestrator | 13:00:04.971 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 20s [id=3a8265e5-9060-4e71-827f-abb2392bd419] 2025-11-08 13:00:05.162002 | orchestrator | 13:00:05.161 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 20s [id=082bdc9b-f25c-4c29-a297-5d4ddf94a870] 2025-11-08 13:00:05.183817 | orchestrator | 13:00:05.183 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 20s [id=350e8d58-71c1-48a7-a545-1127e85feb46] 2025-11-08 13:00:05.268005 | orchestrator | 13:00:05.267 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 20s [id=24b21ff8-a175-4aec-80d0-8b6c56ba52ee] 2025-11-08 13:00:14.593911 | orchestrator | 13:00:14.593 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2025-11-08 13:00:14.604304 | orchestrator | 13:00:14.603 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2025-11-08 13:00:15.212698 | orchestrator | 13:00:15.212 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 30s [id=8e84f382-25b0-4155-a635-1e54501a65c5] 2025-11-08 13:00:15.344723 | orchestrator | 13:00:15.344 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 30s [id=9893330d-06cf-48b8-88c1-f96151edb39f] 2025-11-08 13:00:15.372680 | orchestrator | 13:00:15.372 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-11-08 13:00:15.373908 | orchestrator | 13:00:15.373 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-11-08 13:00:15.377238 | orchestrator | 13:00:15.375 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-11-08 13:00:15.387811 | orchestrator | 13:00:15.387 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=4716029668702586993] 2025-11-08 13:00:15.392576 | orchestrator | 13:00:15.392 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-11-08 13:00:15.395452 | orchestrator | 13:00:15.395 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-11-08 13:00:15.400824 | orchestrator | 13:00:15.400 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-11-08 13:00:15.406141 | orchestrator | 13:00:15.405 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-11-08 13:00:15.408374 | orchestrator | 13:00:15.408 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-11-08 13:00:15.408690 | orchestrator | 13:00:15.408 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-11-08 13:00:15.421662 | orchestrator | 13:00:15.421 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-11-08 13:00:15.435175 | orchestrator | 13:00:15.434 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 
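null_resource.node_semaphore completes instantly because it manages nothing; it looks like the usual pattern of a pure dependency anchor, so that later steps can wait on "all node servers exist" through a single reference. A sketch of that pattern; the actual dependency list is an assumption:

resource "null_resource" "node_semaphore" {
  # no provisioners, no real infrastructure: exists only so other resources
  # can depend on the complete set of node servers in one place
  depends_on = [openstack_compute_instance_v2.node_server]
}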
2025-11-08 13:00:18.759205 | orchestrator | 13:00:18.758 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=24b21ff8-a175-4aec-80d0-8b6c56ba52ee/c4ff64d0-4838-4e36-9da9-d01e7c6d3995] 2025-11-08 13:00:18.764618 | orchestrator | 13:00:18.764 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=350e8d58-71c1-48a7-a545-1127e85feb46/e000e6ad-d7f7-4db6-bbc8-734d25f4dc3b] 2025-11-08 13:00:18.853646 | orchestrator | 13:00:18.853 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=082bdc9b-f25c-4c29-a297-5d4ddf94a870/a45a4cf7-d855-4857-b9ae-b573b3c7176d] 2025-11-08 13:00:18.855251 | orchestrator | 13:00:18.854 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=24b21ff8-a175-4aec-80d0-8b6c56ba52ee/f84a4500-4dd6-44ad-a9ff-274f9f36fc36] 2025-11-08 13:00:18.877898 | orchestrator | 13:00:18.877 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=350e8d58-71c1-48a7-a545-1127e85feb46/3757d830-b0af-49e2-85a4-9877085f3a2f] 2025-11-08 13:00:18.915493 | orchestrator | 13:00:18.914 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=082bdc9b-f25c-4c29-a297-5d4ddf94a870/dc29408d-4f3e-478d-82da-c226aaca029c] 2025-11-08 13:00:24.953490 | orchestrator | 13:00:24.952 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 10s [id=24b21ff8-a175-4aec-80d0-8b6c56ba52ee/4485c49e-1f3e-4177-b8cf-e377966726ff] 2025-11-08 13:00:25.012276 | orchestrator | 13:00:25.011 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 10s [id=350e8d58-71c1-48a7-a545-1127e85feb46/ce3e3473-55e8-454e-8a0a-ac291b184d20] 2025-11-08 13:00:25.027299 | orchestrator | 13:00:25.026 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=082bdc9b-f25c-4c29-a297-5d4ddf94a870/92c2e246-dc93-49f1-98da-a6574bccf4cb] 2025-11-08 13:00:25.436300 | orchestrator | 13:00:25.436 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-11-08 13:00:35.439559 | orchestrator | 13:00:35.439 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-11-08 13:00:35.937892 | orchestrator | 13:00:35.937 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=f53c7e40-d2fe-4961-9d4f-a33802288d04] 2025-11-08 13:00:35.959066 | orchestrator | 13:00:35.958 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
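Attaching the data volumes is a separate step once the instances exist; each attachment only needs an instance ID and a volume ID. A sketch for the nine attachments seen above, with the volume-to-node mapping left as a hypothetical lookup since it is not visible in this log:

resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  # local.volume_to_node is a hypothetical map from volume index to node index
  instance_id = openstack_compute_instance_v2.node_server[local.volume_to_node[count.index]].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}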
2025-11-08 13:00:35.959176 | orchestrator | 13:00:35.958 STDOUT terraform: Outputs: 2025-11-08 13:00:35.959192 | orchestrator | 13:00:35.958 STDOUT terraform: manager_address = 2025-11-08 13:00:35.959206 | orchestrator | 13:00:35.958 STDOUT terraform: private_key = 2025-11-08 13:00:36.109386 | orchestrator | ok: Runtime: 0:01:09.383117 2025-11-08 13:00:36.141673 | 2025-11-08 13:00:36.141788 | TASK [Create infrastructure (stable)] 2025-11-08 13:00:36.674827 | orchestrator | skipping: Conditional result was False 2025-11-08 13:00:36.684840 | 2025-11-08 13:00:36.684959 | TASK [Fetch manager address] 2025-11-08 13:00:37.127818 | orchestrator | ok 2025-11-08 13:00:37.137678 | 2025-11-08 13:00:37.137800 | TASK [Set manager_host address] 2025-11-08 13:00:37.218895 | orchestrator | ok 2025-11-08 13:00:37.228637 | 2025-11-08 13:00:37.228768 | LOOP [Update ansible collections] 2025-11-08 13:00:40.411400 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-11-08 13:00:40.411978 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-11-08 13:00:40.412058 | orchestrator | Starting galaxy collection install process 2025-11-08 13:00:40.412098 | orchestrator | Process install dependency map 2025-11-08 13:00:40.412134 | orchestrator | Starting collection install process 2025-11-08 13:00:40.412168 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons' 2025-11-08 13:00:40.412209 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons 2025-11-08 13:00:40.412252 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-11-08 13:00:40.412328 | orchestrator | ok: Item: commons Runtime: 0:00:02.842216 2025-11-08 13:00:41.942599 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-11-08 13:00:41.943268 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-11-08 13:00:41.943362 | orchestrator | Starting galaxy collection install process 2025-11-08 13:00:41.943405 | orchestrator | Process install dependency map 2025-11-08 13:00:41.943441 | orchestrator | Starting collection install process 2025-11-08 13:00:41.943475 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services' 2025-11-08 13:00:41.943508 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services 2025-11-08 13:00:41.943540 | orchestrator | osism.services:999.0.0 was installed successfully 2025-11-08 13:00:41.943610 | orchestrator | ok: Item: services Runtime: 0:00:01.260246 2025-11-08 13:00:41.964040 | 2025-11-08 13:00:41.964182 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-11-08 13:00:52.513903 | orchestrator | ok 2025-11-08 13:00:52.526094 | 2025-11-08 13:00:52.526219 | TASK [Wait a little longer for the manager so that everything is ready] 2025-11-08 13:01:52.569274 | orchestrator | ok 2025-11-08 13:01:52.579214 | 2025-11-08 13:01:52.579333 | TASK [Fetch manager ssh hostkey] 2025-11-08 13:01:54.149006 | orchestrator | Output suppressed because no_log was given 2025-11-08 13:01:54.165938 | 2025-11-08 13:01:54.166117 | TASK [Get ssh keypair from terraform environment] 2025-11-08 13:01:54.702215 | orchestrator 
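Both stack outputs are declared sensitive, which is why the apply prints them without values; the subsequent "Fetch manager address" and "Get ssh keypair from terraform environment" tasks read them back from the state (for example with terraform output -raw manager_address). The declarations presumably look like this; the value expressions are assumptions:

output "manager_address" {
  value     = openstack_networking_floatingip_v2.manager_floating_ip.address
  sensitive = true
}

output "private_key" {
  value     = openstack_compute_keypair_v2.key.private_key
  sensitive = true
}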
| ok: Runtime: 0:00:00.009052 2025-11-08 13:01:54.709896 | 2025-11-08 13:01:54.710015 | TASK [Point out that the following task takes some time and does not give any output] 2025-11-08 13:01:54.739481 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-11-08 13:01:54.746212 | 2025-11-08 13:01:54.746312 | TASK [Run manager part 0] 2025-11-08 13:01:55.779770 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-11-08 13:01:55.826829 | orchestrator | 2025-11-08 13:01:55.826863 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-11-08 13:01:55.826870 | orchestrator | 2025-11-08 13:01:55.826883 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-11-08 13:01:57.722165 | orchestrator | ok: [testbed-manager] 2025-11-08 13:01:57.722220 | orchestrator | 2025-11-08 13:01:57.722242 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-11-08 13:01:57.722252 | orchestrator | 2025-11-08 13:01:57.722266 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-11-08 13:01:59.689880 | orchestrator | ok: [testbed-manager] 2025-11-08 13:01:59.689925 | orchestrator | 2025-11-08 13:01:59.690007 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-11-08 13:02:00.280856 | orchestrator | ok: [testbed-manager] 2025-11-08 13:02:00.280946 | orchestrator | 2025-11-08 13:02:00.281033 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-11-08 13:02:00.326540 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:02:00.326590 | orchestrator | 2025-11-08 13:02:00.326601 | orchestrator | TASK [Update package cache] **************************************************** 2025-11-08 13:02:00.356905 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:02:00.356995 | orchestrator | 2025-11-08 13:02:00.357008 | orchestrator | TASK [Install required packages] *********************************************** 2025-11-08 13:02:00.398535 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:02:00.398625 | orchestrator | 2025-11-08 13:02:00.398647 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-11-08 13:02:00.428060 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:02:00.428118 | orchestrator | 2025-11-08 13:02:00.428129 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-11-08 13:02:00.464205 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:02:00.464277 | orchestrator | 2025-11-08 13:02:00.464291 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2025-11-08 13:02:00.505901 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:02:00.505951 | orchestrator | 2025-11-08 13:02:00.505960 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-11-08 13:02:00.533716 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:02:00.533777 | orchestrator | 2025-11-08 13:02:00.533793 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-11-08 13:02:01.289214 | orchestrator | changed: 
[testbed-manager] 2025-11-08 13:02:01.289261 | orchestrator | 2025-11-08 13:02:01.289270 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-11-08 13:04:31.896662 | orchestrator | changed: [testbed-manager] 2025-11-08 13:04:31.896742 | orchestrator | 2025-11-08 13:04:31.896763 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-11-08 13:05:50.424456 | orchestrator | changed: [testbed-manager] 2025-11-08 13:05:50.424561 | orchestrator | 2025-11-08 13:05:50.424580 | orchestrator | TASK [Install required packages] *********************************************** 2025-11-08 13:06:09.541162 | orchestrator | changed: [testbed-manager] 2025-11-08 13:06:09.541255 | orchestrator | 2025-11-08 13:06:09.541274 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-11-08 13:06:17.941111 | orchestrator | changed: [testbed-manager] 2025-11-08 13:06:17.941155 | orchestrator | 2025-11-08 13:06:17.941164 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-11-08 13:06:17.988358 | orchestrator | ok: [testbed-manager] 2025-11-08 13:06:17.988393 | orchestrator | 2025-11-08 13:06:17.988402 | orchestrator | TASK [Get current user] ******************************************************** 2025-11-08 13:06:18.755430 | orchestrator | ok: [testbed-manager] 2025-11-08 13:06:18.755516 | orchestrator | 2025-11-08 13:06:18.755534 | orchestrator | TASK [Create venv directory] *************************************************** 2025-11-08 13:06:19.498453 | orchestrator | changed: [testbed-manager] 2025-11-08 13:06:19.498540 | orchestrator | 2025-11-08 13:06:19.498556 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-11-08 13:06:25.906724 | orchestrator | changed: [testbed-manager] 2025-11-08 13:06:25.906804 | orchestrator | 2025-11-08 13:06:25.906837 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-11-08 13:06:31.984802 | orchestrator | changed: [testbed-manager] 2025-11-08 13:06:31.984920 | orchestrator | 2025-11-08 13:06:31.984940 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-11-08 13:06:34.530408 | orchestrator | changed: [testbed-manager] 2025-11-08 13:06:34.530495 | orchestrator | 2025-11-08 13:06:34.530509 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-11-08 13:06:36.258760 | orchestrator | changed: [testbed-manager] 2025-11-08 13:06:36.258808 | orchestrator | 2025-11-08 13:06:36.258816 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-11-08 13:06:37.359351 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-11-08 13:06:37.359434 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-11-08 13:06:37.359449 | orchestrator | 2025-11-08 13:06:37.359462 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-11-08 13:06:37.401115 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-11-08 13:06:37.401160 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-11-08 13:06:37.401166 | orchestrator | 2.19. 
Deprecation warnings can be disabled by setting 2025-11-08 13:06:37.401171 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-11-08 13:06:46.883370 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-11-08 13:06:46.883464 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-11-08 13:06:46.883479 | orchestrator | 2025-11-08 13:06:46.883492 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-11-08 13:06:47.428033 | orchestrator | changed: [testbed-manager] 2025-11-08 13:06:47.428118 | orchestrator | 2025-11-08 13:06:47.428132 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-11-08 13:10:07.632891 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-11-08 13:10:07.632993 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-11-08 13:10:07.633011 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-11-08 13:10:07.633024 | orchestrator | 2025-11-08 13:10:07.633036 | orchestrator | TASK [Install local collections] *********************************************** 2025-11-08 13:10:09.894232 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-11-08 13:10:09.894262 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-11-08 13:10:09.894266 | orchestrator | 2025-11-08 13:10:09.894271 | orchestrator | PLAY [Create operator user] **************************************************** 2025-11-08 13:10:09.894276 | orchestrator | 2025-11-08 13:10:09.894280 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-11-08 13:10:11.241416 | orchestrator | ok: [testbed-manager] 2025-11-08 13:10:11.241490 | orchestrator | 2025-11-08 13:10:11.241508 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-11-08 13:10:11.290194 | orchestrator | ok: [testbed-manager] 2025-11-08 13:10:11.290244 | orchestrator | 2025-11-08 13:10:11.290251 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-11-08 13:10:11.350860 | orchestrator | ok: [testbed-manager] 2025-11-08 13:10:11.350907 | orchestrator | 2025-11-08 13:10:11.350913 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-11-08 13:10:12.065121 | orchestrator | changed: [testbed-manager] 2025-11-08 13:10:12.065150 | orchestrator | 2025-11-08 13:10:12.065156 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-11-08 13:10:12.758781 | orchestrator | changed: [testbed-manager] 2025-11-08 13:10:12.758872 | orchestrator | 2025-11-08 13:10:12.758891 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-11-08 13:10:14.087617 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-11-08 13:10:14.087652 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-11-08 13:10:14.087660 | orchestrator | 2025-11-08 13:10:14.087673 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-11-08 13:10:15.380654 | orchestrator | changed: [testbed-manager] 2025-11-08 13:10:15.380710 | orchestrator | 2025-11-08 13:10:15.380718 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc 
configuration file] *** 2025-11-08 13:10:17.090523 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-11-08 13:10:17.090744 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-11-08 13:10:17.090761 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-11-08 13:10:17.090773 | orchestrator | 2025-11-08 13:10:17.090786 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-11-08 13:10:17.150689 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:10:17.150755 | orchestrator | 2025-11-08 13:10:17.150769 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2025-11-08 13:10:17.217791 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:10:17.217849 | orchestrator | 2025-11-08 13:10:17.217859 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-11-08 13:10:17.732657 | orchestrator | changed: [testbed-manager] 2025-11-08 13:10:17.732693 | orchestrator | 2025-11-08 13:10:17.732701 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-11-08 13:10:17.798076 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:10:17.798113 | orchestrator | 2025-11-08 13:10:17.798122 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-11-08 13:10:18.605518 | orchestrator | changed: [testbed-manager] => (item=None) 2025-11-08 13:10:18.605601 | orchestrator | changed: [testbed-manager] 2025-11-08 13:10:18.605616 | orchestrator | 2025-11-08 13:10:18.605630 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-11-08 13:10:18.645621 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:10:18.645707 | orchestrator | 2025-11-08 13:10:18.645725 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-11-08 13:10:18.680808 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:10:18.680880 | orchestrator | 2025-11-08 13:10:18.680893 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-11-08 13:10:18.714548 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:10:18.714603 | orchestrator | 2025-11-08 13:10:18.714619 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-11-08 13:10:18.774966 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:10:18.775020 | orchestrator | 2025-11-08 13:10:18.775036 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-11-08 13:10:19.493712 | orchestrator | ok: [testbed-manager] 2025-11-08 13:10:19.493805 | orchestrator | 2025-11-08 13:10:19.493845 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-11-08 13:10:19.493859 | orchestrator | 2025-11-08 13:10:19.493870 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-11-08 13:10:20.842617 | orchestrator | ok: [testbed-manager] 2025-11-08 13:10:20.842661 | orchestrator | 2025-11-08 13:10:20.842667 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-11-08 13:10:21.783261 | orchestrator | changed: [testbed-manager] 2025-11-08 13:10:21.783300 | orchestrator | 2025-11-08 
13:10:21.783306 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 13:10:21.783312 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2025-11-08 13:10:21.783316 | orchestrator | 2025-11-08 13:10:22.070143 | orchestrator | ok: Runtime: 0:08:26.822867 2025-11-08 13:10:22.086420 | 2025-11-08 13:10:22.086578 | TASK [Point out that logging in on the manager is now possible] 2025-11-08 13:10:22.119232 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2025-11-08 13:10:22.127404 | 2025-11-08 13:10:22.127512 | TASK [Point out that the following task takes some time and does not give any output] 2025-11-08 13:10:22.159333 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output from it here. It takes a few minutes for this task to complete. 2025-11-08 13:10:22.167400 | 2025-11-08 13:10:22.167506 | TASK [Run manager part 1 + 2] 2025-11-08 13:10:23.260634 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-11-08 13:10:23.315952 | orchestrator | 2025-11-08 13:10:23.316007 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-11-08 13:10:23.316015 | orchestrator | 2025-11-08 13:10:23.316028 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-11-08 13:10:26.235684 | orchestrator | ok: [testbed-manager] 2025-11-08 13:10:26.235777 | orchestrator | 2025-11-08 13:10:26.235850 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-11-08 13:10:26.279052 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:10:26.279118 | orchestrator | 2025-11-08 13:10:26.279131 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-11-08 13:10:26.323849 | orchestrator | ok: [testbed-manager] 2025-11-08 13:10:26.323907 | orchestrator | 2025-11-08 13:10:26.323916 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-11-08 13:10:26.365220 | orchestrator | ok: [testbed-manager] 2025-11-08 13:10:26.365275 | orchestrator | 2025-11-08 13:10:26.365285 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-11-08 13:10:26.431954 | orchestrator | ok: [testbed-manager] 2025-11-08 13:10:26.432007 | orchestrator | 2025-11-08 13:10:26.432014 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-11-08 13:10:26.488935 | orchestrator | ok: [testbed-manager] 2025-11-08 13:10:26.488983 | orchestrator | 2025-11-08 13:10:26.488991 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-11-08 13:10:26.535338 | orchestrator | included: /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-11-08 13:10:26.535390 | orchestrator | 2025-11-08 13:10:26.535397 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-11-08 13:10:27.254664 | orchestrator | ok: [testbed-manager] 2025-11-08 13:10:27.254891 | orchestrator | 2025-11-08 13:10:27.254914 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-11-08 13:10:27.305626 |
orchestrator | skipping: [testbed-manager] 2025-11-08 13:10:27.305686 | orchestrator | 2025-11-08 13:10:27.305693 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-11-08 13:10:28.702920 | orchestrator | changed: [testbed-manager] 2025-11-08 13:10:28.703007 | orchestrator | 2025-11-08 13:10:28.703024 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-11-08 13:10:29.295006 | orchestrator | ok: [testbed-manager] 2025-11-08 13:10:29.295094 | orchestrator | 2025-11-08 13:10:29.295112 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-11-08 13:10:30.437804 | orchestrator | changed: [testbed-manager] 2025-11-08 13:10:30.437911 | orchestrator | 2025-11-08 13:10:30.437929 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-11-08 13:10:47.169303 | orchestrator | changed: [testbed-manager] 2025-11-08 13:10:47.169392 | orchestrator | 2025-11-08 13:10:47.169406 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-11-08 13:10:47.790103 | orchestrator | ok: [testbed-manager] 2025-11-08 13:10:47.790176 | orchestrator | 2025-11-08 13:10:47.790189 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-11-08 13:10:47.838343 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:10:47.838395 | orchestrator | 2025-11-08 13:10:47.838406 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-11-08 13:10:48.708541 | orchestrator | changed: [testbed-manager] 2025-11-08 13:10:48.708603 | orchestrator | 2025-11-08 13:10:48.708612 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-11-08 13:10:49.617578 | orchestrator | changed: [testbed-manager] 2025-11-08 13:10:49.617638 | orchestrator | 2025-11-08 13:10:49.617647 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-11-08 13:10:50.183695 | orchestrator | changed: [testbed-manager] 2025-11-08 13:10:50.183777 | orchestrator | 2025-11-08 13:10:50.183793 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-11-08 13:10:50.223368 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-11-08 13:10:50.223477 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-11-08 13:10:50.223494 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-11-08 13:10:50.223507 | orchestrator | deprecation_warnings=False in ansible.cfg. 
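The deprecation warning above already names the switch that silences it. A minimal sketch of the corresponding ansible.cfg stanza, assuming the configuration in use is the same file the deploy scripts later edit (/opt/configuration/environments/ansible.cfg); add the key under the existing [defaults] section rather than duplicating the section header:

    # ansible.cfg (sketch) -- suppresses deprecation warnings such as the
    # connection-stdin notice logged above
    [defaults]
    deprecation_warnings = False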
2025-11-08 13:10:52.769218 | orchestrator | changed: [testbed-manager] 2025-11-08 13:10:52.769314 | orchestrator | 2025-11-08 13:10:52.769331 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-11-08 13:11:01.381602 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-11-08 13:11:01.381708 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-11-08 13:11:01.381727 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-11-08 13:11:01.381741 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-11-08 13:11:01.381762 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-11-08 13:11:01.381774 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-11-08 13:11:01.381785 | orchestrator | 2025-11-08 13:11:01.381798 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-11-08 13:11:02.431184 | orchestrator | changed: [testbed-manager] 2025-11-08 13:11:02.431227 | orchestrator | 2025-11-08 13:11:02.431234 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-11-08 13:11:02.472535 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:11:02.472576 | orchestrator | 2025-11-08 13:11:02.472583 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-11-08 13:11:05.492589 | orchestrator | changed: [testbed-manager] 2025-11-08 13:11:05.492634 | orchestrator | 2025-11-08 13:11:05.492643 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-11-08 13:11:05.536726 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:11:05.536767 | orchestrator | 2025-11-08 13:11:05.536777 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-11-08 13:12:39.611442 | orchestrator | changed: [testbed-manager] 2025-11-08 13:12:39.611551 | orchestrator | 2025-11-08 13:12:39.611559 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-11-08 13:12:40.708380 | orchestrator | ok: [testbed-manager] 2025-11-08 13:12:40.708468 | orchestrator | 2025-11-08 13:12:40.708484 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 13:12:40.708498 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-11-08 13:12:40.708509 | orchestrator | 2025-11-08 13:12:41.301988 | orchestrator | ok: Runtime: 0:02:18.337035 2025-11-08 13:12:41.317721 | 2025-11-08 13:12:41.317866 | TASK [Reboot manager] 2025-11-08 13:12:42.853203 | orchestrator | ok: Runtime: 0:00:00.951026 2025-11-08 13:12:42.869727 | 2025-11-08 13:12:42.869877 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-11-08 13:12:56.840440 | orchestrator | ok 2025-11-08 13:12:56.850168 | 2025-11-08 13:12:56.850288 | TASK [Wait a little longer for the manager so that everything is ready] 2025-11-08 13:13:56.898095 | orchestrator | ok 2025-11-08 13:13:56.908485 | 2025-11-08 13:13:56.908609 | TASK [Deploy manager + bootstrap nodes] 2025-11-08 13:13:59.377923 | orchestrator | 2025-11-08 13:13:59.378410 | orchestrator | # DEPLOY MANAGER 2025-11-08 13:13:59.378461 | orchestrator | 2025-11-08 13:13:59.378488 | orchestrator | + set -e 2025-11-08 13:13:59.378511 | orchestrator | + echo 2025-11-08 13:13:59.378530 | orchestrator | + echo '# DEPLOY 
MANAGER' 2025-11-08 13:13:59.378550 | orchestrator | + echo 2025-11-08 13:13:59.378600 | orchestrator | + cat /opt/manager-vars.sh 2025-11-08 13:13:59.381031 | orchestrator | export NUMBER_OF_NODES=6 2025-11-08 13:13:59.381125 | orchestrator | 2025-11-08 13:13:59.381141 | orchestrator | export CEPH_VERSION=reef 2025-11-08 13:13:59.381157 | orchestrator | export CONFIGURATION_VERSION=main 2025-11-08 13:13:59.381170 | orchestrator | export MANAGER_VERSION=latest 2025-11-08 13:13:59.381201 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-11-08 13:13:59.381212 | orchestrator | 2025-11-08 13:13:59.381232 | orchestrator | export ARA=false 2025-11-08 13:13:59.381243 | orchestrator | export DEPLOY_MODE=manager 2025-11-08 13:13:59.381262 | orchestrator | export TEMPEST=false 2025-11-08 13:13:59.381274 | orchestrator | export IS_ZUUL=true 2025-11-08 13:13:59.381285 | orchestrator | 2025-11-08 13:13:59.381304 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.186 2025-11-08 13:13:59.381316 | orchestrator | export EXTERNAL_API=false 2025-11-08 13:13:59.381327 | orchestrator | 2025-11-08 13:13:59.381339 | orchestrator | export IMAGE_USER=ubuntu 2025-11-08 13:13:59.381353 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-11-08 13:13:59.381364 | orchestrator | 2025-11-08 13:13:59.381375 | orchestrator | export CEPH_STACK=ceph-ansible 2025-11-08 13:13:59.381398 | orchestrator | 2025-11-08 13:13:59.381410 | orchestrator | + echo 2025-11-08 13:13:59.381423 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-11-08 13:13:59.382071 | orchestrator | ++ export INTERACTIVE=false 2025-11-08 13:13:59.382094 | orchestrator | ++ INTERACTIVE=false 2025-11-08 13:13:59.382106 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-11-08 13:13:59.382118 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-11-08 13:13:59.382310 | orchestrator | + source /opt/manager-vars.sh 2025-11-08 13:13:59.382327 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-11-08 13:13:59.382339 | orchestrator | ++ NUMBER_OF_NODES=6 2025-11-08 13:13:59.382350 | orchestrator | ++ export CEPH_VERSION=reef 2025-11-08 13:13:59.382361 | orchestrator | ++ CEPH_VERSION=reef 2025-11-08 13:13:59.382372 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-11-08 13:13:59.382384 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-11-08 13:13:59.382395 | orchestrator | ++ export MANAGER_VERSION=latest 2025-11-08 13:13:59.382407 | orchestrator | ++ MANAGER_VERSION=latest 2025-11-08 13:13:59.382418 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-11-08 13:13:59.382441 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-11-08 13:13:59.382458 | orchestrator | ++ export ARA=false 2025-11-08 13:13:59.382469 | orchestrator | ++ ARA=false 2025-11-08 13:13:59.382481 | orchestrator | ++ export DEPLOY_MODE=manager 2025-11-08 13:13:59.382492 | orchestrator | ++ DEPLOY_MODE=manager 2025-11-08 13:13:59.382503 | orchestrator | ++ export TEMPEST=false 2025-11-08 13:13:59.382514 | orchestrator | ++ TEMPEST=false 2025-11-08 13:13:59.382525 | orchestrator | ++ export IS_ZUUL=true 2025-11-08 13:13:59.382536 | orchestrator | ++ IS_ZUUL=true 2025-11-08 13:13:59.382547 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.186 2025-11-08 13:13:59.382559 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.186 2025-11-08 13:13:59.382574 | orchestrator | ++ export EXTERNAL_API=false 2025-11-08 13:13:59.382586 | orchestrator | ++ EXTERNAL_API=false 2025-11-08 13:13:59.382597 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-11-08 
13:13:59.382608 | orchestrator | ++ IMAGE_USER=ubuntu 2025-11-08 13:13:59.382619 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-11-08 13:13:59.382630 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-11-08 13:13:59.382642 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-11-08 13:13:59.382653 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-11-08 13:13:59.382762 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-11-08 13:13:59.445933 | orchestrator | + docker version 2025-11-08 13:13:59.708064 | orchestrator | Client: Docker Engine - Community 2025-11-08 13:13:59.708156 | orchestrator | Version: 27.5.1 2025-11-08 13:13:59.708172 | orchestrator | API version: 1.47 2025-11-08 13:13:59.708184 | orchestrator | Go version: go1.22.11 2025-11-08 13:13:59.708195 | orchestrator | Git commit: 9f9e405 2025-11-08 13:13:59.708207 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-11-08 13:13:59.708219 | orchestrator | OS/Arch: linux/amd64 2025-11-08 13:13:59.708230 | orchestrator | Context: default 2025-11-08 13:13:59.708242 | orchestrator | 2025-11-08 13:13:59.708253 | orchestrator | Server: Docker Engine - Community 2025-11-08 13:13:59.708264 | orchestrator | Engine: 2025-11-08 13:13:59.708276 | orchestrator | Version: 27.5.1 2025-11-08 13:13:59.708287 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-11-08 13:13:59.708333 | orchestrator | Go version: go1.22.11 2025-11-08 13:13:59.708344 | orchestrator | Git commit: 4c9b3b0 2025-11-08 13:13:59.708356 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-11-08 13:13:59.708367 | orchestrator | OS/Arch: linux/amd64 2025-11-08 13:13:59.708378 | orchestrator | Experimental: false 2025-11-08 13:13:59.708389 | orchestrator | containerd: 2025-11-08 13:13:59.708400 | orchestrator | Version: v1.7.29 2025-11-08 13:13:59.708411 | orchestrator | GitCommit: 442cb34bda9a6a0fed82a2ca7cade05c5c749582 2025-11-08 13:13:59.708423 | orchestrator | runc: 2025-11-08 13:13:59.708434 | orchestrator | Version: 1.3.3 2025-11-08 13:13:59.708445 | orchestrator | GitCommit: v1.3.3-0-gd842d771 2025-11-08 13:13:59.708456 | orchestrator | docker-init: 2025-11-08 13:13:59.708467 | orchestrator | Version: 0.19.0 2025-11-08 13:13:59.708491 | orchestrator | GitCommit: de40ad0 2025-11-08 13:13:59.711465 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-11-08 13:13:59.721029 | orchestrator | + set -e 2025-11-08 13:13:59.721053 | orchestrator | + source /opt/manager-vars.sh 2025-11-08 13:13:59.721066 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-11-08 13:13:59.721077 | orchestrator | ++ NUMBER_OF_NODES=6 2025-11-08 13:13:59.721088 | orchestrator | ++ export CEPH_VERSION=reef 2025-11-08 13:13:59.721099 | orchestrator | ++ CEPH_VERSION=reef 2025-11-08 13:13:59.721110 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-11-08 13:13:59.721122 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-11-08 13:13:59.721133 | orchestrator | ++ export MANAGER_VERSION=latest 2025-11-08 13:13:59.721144 | orchestrator | ++ MANAGER_VERSION=latest 2025-11-08 13:13:59.721155 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-11-08 13:13:59.721166 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-11-08 13:13:59.721177 | orchestrator | ++ export ARA=false 2025-11-08 13:13:59.721188 | orchestrator | ++ ARA=false 2025-11-08 13:13:59.721199 | orchestrator | ++ export DEPLOY_MODE=manager 2025-11-08 13:13:59.721210 | orchestrator | ++ DEPLOY_MODE=manager 2025-11-08 13:13:59.721221 | orchestrator | ++ 
export TEMPEST=false 2025-11-08 13:13:59.721232 | orchestrator | ++ TEMPEST=false 2025-11-08 13:13:59.721243 | orchestrator | ++ export IS_ZUUL=true 2025-11-08 13:13:59.721254 | orchestrator | ++ IS_ZUUL=true 2025-11-08 13:13:59.721265 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.186 2025-11-08 13:13:59.721276 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.186 2025-11-08 13:13:59.721287 | orchestrator | ++ export EXTERNAL_API=false 2025-11-08 13:13:59.721298 | orchestrator | ++ EXTERNAL_API=false 2025-11-08 13:13:59.721309 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-11-08 13:13:59.721320 | orchestrator | ++ IMAGE_USER=ubuntu 2025-11-08 13:13:59.721331 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-11-08 13:13:59.721342 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-11-08 13:13:59.721353 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-11-08 13:13:59.721364 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-11-08 13:13:59.721375 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-11-08 13:13:59.721386 | orchestrator | ++ export INTERACTIVE=false 2025-11-08 13:13:59.721397 | orchestrator | ++ INTERACTIVE=false 2025-11-08 13:13:59.721408 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-11-08 13:13:59.721422 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-11-08 13:13:59.721438 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-11-08 13:13:59.721450 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-11-08 13:13:59.721461 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2025-11-08 13:13:59.727936 | orchestrator | + set -e 2025-11-08 13:13:59.727958 | orchestrator | + VERSION=reef 2025-11-08 13:13:59.729284 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2025-11-08 13:13:59.735677 | orchestrator | + [[ -n ceph_version: reef ]] 2025-11-08 13:13:59.735704 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2025-11-08 13:13:59.741927 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2025-11-08 13:13:59.748459 | orchestrator | + set -e 2025-11-08 13:13:59.748483 | orchestrator | + VERSION=2024.2 2025-11-08 13:13:59.749459 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2025-11-08 13:13:59.753062 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2025-11-08 13:13:59.753083 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2025-11-08 13:13:59.758512 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-11-08 13:13:59.759673 | orchestrator | ++ semver latest 7.0.0 2025-11-08 13:13:59.822977 | orchestrator | + [[ -1 -ge 0 ]] 2025-11-08 13:13:59.823064 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-11-08 13:13:59.823082 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-11-08 13:13:59.823095 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-11-08 13:13:59.921309 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-11-08 13:13:59.922623 | orchestrator | + source /opt/venv/bin/activate 2025-11-08 13:13:59.923944 | orchestrator | ++ deactivate nondestructive 2025-11-08 13:13:59.923966 | orchestrator | ++ '[' -n '' ']' 2025-11-08 13:13:59.923979 | orchestrator | ++ '[' -n '' ']' 2025-11-08 13:13:59.923996 | orchestrator | ++ hash -r 2025-11-08 13:13:59.924008 | orchestrator | 
++ '[' -n '' ']' 2025-11-08 13:13:59.924019 | orchestrator | ++ unset VIRTUAL_ENV 2025-11-08 13:13:59.924030 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-11-08 13:13:59.924041 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-11-08 13:13:59.924188 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-11-08 13:13:59.924204 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-11-08 13:13:59.924218 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-11-08 13:13:59.924230 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-11-08 13:13:59.924242 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-11-08 13:13:59.924258 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-11-08 13:13:59.924270 | orchestrator | ++ export PATH 2025-11-08 13:13:59.924282 | orchestrator | ++ '[' -n '' ']' 2025-11-08 13:13:59.924339 | orchestrator | ++ '[' -z '' ']' 2025-11-08 13:13:59.924353 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-11-08 13:13:59.924364 | orchestrator | ++ PS1='(venv) ' 2025-11-08 13:13:59.924379 | orchestrator | ++ export PS1 2025-11-08 13:13:59.924390 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-11-08 13:13:59.924401 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-11-08 13:13:59.924412 | orchestrator | ++ hash -r 2025-11-08 13:13:59.924600 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-11-08 13:14:01.138493 | orchestrator | 2025-11-08 13:14:01.138609 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-11-08 13:14:01.138626 | orchestrator | 2025-11-08 13:14:01.138638 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-11-08 13:14:01.690426 | orchestrator | ok: [testbed-manager] 2025-11-08 13:14:01.690559 | orchestrator | 2025-11-08 13:14:01.690576 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-11-08 13:14:02.661227 | orchestrator | changed: [testbed-manager] 2025-11-08 13:14:02.661355 | orchestrator | 2025-11-08 13:14:02.661372 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-11-08 13:14:02.661384 | orchestrator | 2025-11-08 13:14:02.661395 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-11-08 13:14:05.976981 | orchestrator | ok: [testbed-manager] 2025-11-08 13:14:05.977101 | orchestrator | 2025-11-08 13:14:05.977116 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-11-08 13:14:06.038076 | orchestrator | ok: [testbed-manager] 2025-11-08 13:14:06.038117 | orchestrator | 2025-11-08 13:14:06.038135 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-11-08 13:14:06.494177 | orchestrator | changed: [testbed-manager] 2025-11-08 13:14:06.494274 | orchestrator | 2025-11-08 13:14:06.494288 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2025-11-08 13:14:06.537728 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:14:06.537747 | orchestrator | 2025-11-08 13:14:06.537759 | orchestrator | TASK [Install HWE kernel package on Ubuntu] 
************************************ 2025-11-08 13:14:06.864328 | orchestrator | changed: [testbed-manager] 2025-11-08 13:14:06.864415 | orchestrator | 2025-11-08 13:14:06.864428 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-11-08 13:14:06.920795 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:14:06.920815 | orchestrator | 2025-11-08 13:14:06.920848 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-11-08 13:14:07.255790 | orchestrator | ok: [testbed-manager] 2025-11-08 13:14:07.255910 | orchestrator | 2025-11-08 13:14:07.255923 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-11-08 13:14:07.377458 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:14:07.377531 | orchestrator | 2025-11-08 13:14:07.377548 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2025-11-08 13:14:07.377560 | orchestrator | 2025-11-08 13:14:07.377573 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-11-08 13:14:09.096643 | orchestrator | ok: [testbed-manager] 2025-11-08 13:14:09.096749 | orchestrator | 2025-11-08 13:14:09.096763 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-11-08 13:14:09.205252 | orchestrator | included: osism.services.traefik for testbed-manager 2025-11-08 13:14:09.205323 | orchestrator | 2025-11-08 13:14:09.205336 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-11-08 13:14:09.259788 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-11-08 13:14:09.259842 | orchestrator | 2025-11-08 13:14:09.259857 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-11-08 13:14:10.381929 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-11-08 13:14:10.382105 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-11-08 13:14:10.382126 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-11-08 13:14:10.382139 | orchestrator | 2025-11-08 13:14:10.382153 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-11-08 13:14:12.153458 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-11-08 13:14:12.153585 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-11-08 13:14:12.153604 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-11-08 13:14:12.153618 | orchestrator | 2025-11-08 13:14:12.153631 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-11-08 13:14:12.797230 | orchestrator | changed: [testbed-manager] => (item=None) 2025-11-08 13:14:12.797352 | orchestrator | changed: [testbed-manager] 2025-11-08 13:14:12.797370 | orchestrator | 2025-11-08 13:14:12.797384 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-11-08 13:14:13.443723 | orchestrator | changed: [testbed-manager] => (item=None) 2025-11-08 13:14:13.443858 | orchestrator | changed: [testbed-manager] 2025-11-08 13:14:13.443876 | orchestrator | 2025-11-08 13:14:13.443889 | orchestrator | TASK [osism.services.traefik : Copy dynamic 
configuration] ********************* 2025-11-08 13:14:13.500774 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:14:13.500820 | orchestrator | 2025-11-08 13:14:13.500861 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-11-08 13:14:13.847549 | orchestrator | ok: [testbed-manager] 2025-11-08 13:14:13.847646 | orchestrator | 2025-11-08 13:14:13.847662 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-11-08 13:14:13.915053 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-11-08 13:14:13.915107 | orchestrator | 2025-11-08 13:14:13.915120 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-11-08 13:14:14.975322 | orchestrator | changed: [testbed-manager] 2025-11-08 13:14:14.975427 | orchestrator | 2025-11-08 13:14:14.975442 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-11-08 13:14:15.790317 | orchestrator | changed: [testbed-manager] 2025-11-08 13:14:15.790426 | orchestrator | 2025-11-08 13:14:15.790443 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-11-08 13:14:30.887509 | orchestrator | changed: [testbed-manager] 2025-11-08 13:14:30.887635 | orchestrator | 2025-11-08 13:14:30.887652 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-11-08 13:14:30.933760 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:14:30.933859 | orchestrator | 2025-11-08 13:14:30.933876 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-11-08 13:14:30.933888 | orchestrator | 2025-11-08 13:14:30.933900 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-11-08 13:14:32.702596 | orchestrator | ok: [testbed-manager] 2025-11-08 13:14:32.702712 | orchestrator | 2025-11-08 13:14:32.702769 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-11-08 13:14:32.823474 | orchestrator | included: osism.services.manager for testbed-manager 2025-11-08 13:14:32.823557 | orchestrator | 2025-11-08 13:14:32.823573 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-11-08 13:14:32.881681 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-11-08 13:14:32.881746 | orchestrator | 2025-11-08 13:14:32.881760 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-11-08 13:14:35.343921 | orchestrator | ok: [testbed-manager] 2025-11-08 13:14:35.344034 | orchestrator | 2025-11-08 13:14:35.344052 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-11-08 13:14:35.395169 | orchestrator | ok: [testbed-manager] 2025-11-08 13:14:35.395276 | orchestrator | 2025-11-08 13:14:35.395295 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-11-08 13:14:35.517094 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-11-08 13:14:35.517169 | orchestrator | 2025-11-08 13:14:35.517183 | 
orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-11-08 13:14:38.311812 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-11-08 13:14:38.311950 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-11-08 13:14:38.311966 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-11-08 13:14:38.311979 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-11-08 13:14:38.311991 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-11-08 13:14:38.312003 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-11-08 13:14:38.312014 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-11-08 13:14:38.312025 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-11-08 13:14:38.312036 | orchestrator | 2025-11-08 13:14:38.312049 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2025-11-08 13:14:38.936582 | orchestrator | changed: [testbed-manager] 2025-11-08 13:14:38.936673 | orchestrator | 2025-11-08 13:14:38.936689 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-11-08 13:14:39.577427 | orchestrator | changed: [testbed-manager] 2025-11-08 13:14:39.577517 | orchestrator | 2025-11-08 13:14:39.577533 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-11-08 13:14:39.660229 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-11-08 13:14:39.660282 | orchestrator | 2025-11-08 13:14:39.660295 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-11-08 13:14:40.838993 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-11-08 13:14:40.839125 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-11-08 13:14:40.839141 | orchestrator | 2025-11-08 13:14:40.839155 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-11-08 13:14:41.462684 | orchestrator | changed: [testbed-manager] 2025-11-08 13:14:41.462791 | orchestrator | 2025-11-08 13:14:41.462807 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-11-08 13:14:41.515416 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:14:41.515487 | orchestrator | 2025-11-08 13:14:41.515501 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2025-11-08 13:14:41.584570 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2025-11-08 13:14:41.584625 | orchestrator | 2025-11-08 13:14:41.584641 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2025-11-08 13:14:42.171563 | orchestrator | changed: [testbed-manager] 2025-11-08 13:14:42.171678 | orchestrator | 2025-11-08 13:14:42.171694 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-11-08 13:14:42.227558 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-11-08 13:14:42.227640 | orchestrator | 2025-11-08 13:14:42.227655 | orchestrator | 
TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-11-08 13:14:43.551303 | orchestrator | changed: [testbed-manager] => (item=None) 2025-11-08 13:14:43.551420 | orchestrator | changed: [testbed-manager] => (item=None) 2025-11-08 13:14:43.551435 | orchestrator | changed: [testbed-manager] 2025-11-08 13:14:43.551449 | orchestrator | 2025-11-08 13:14:43.551462 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-11-08 13:14:44.163017 | orchestrator | changed: [testbed-manager] 2025-11-08 13:14:44.163137 | orchestrator | 2025-11-08 13:14:44.163155 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-11-08 13:14:44.216429 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:14:44.216488 | orchestrator | 2025-11-08 13:14:44.216504 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-11-08 13:14:44.315079 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-11-08 13:14:44.315152 | orchestrator | 2025-11-08 13:14:44.315168 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-11-08 13:14:44.819416 | orchestrator | changed: [testbed-manager] 2025-11-08 13:14:44.819528 | orchestrator | 2025-11-08 13:14:44.819542 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-11-08 13:14:45.214325 | orchestrator | changed: [testbed-manager] 2025-11-08 13:14:45.214442 | orchestrator | 2025-11-08 13:14:45.214458 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-11-08 13:14:46.454187 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-11-08 13:14:46.454313 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-11-08 13:14:46.454330 | orchestrator | 2025-11-08 13:14:46.454361 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-11-08 13:14:47.082638 | orchestrator | changed: [testbed-manager] 2025-11-08 13:14:47.082753 | orchestrator | 2025-11-08 13:14:47.082770 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-11-08 13:14:47.485723 | orchestrator | ok: [testbed-manager] 2025-11-08 13:14:47.485814 | orchestrator | 2025-11-08 13:14:47.485853 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-11-08 13:14:47.832541 | orchestrator | changed: [testbed-manager] 2025-11-08 13:14:47.832652 | orchestrator | 2025-11-08 13:14:47.832665 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-11-08 13:14:47.881018 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:14:47.881043 | orchestrator | 2025-11-08 13:14:47.881054 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-11-08 13:14:47.947155 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-11-08 13:14:47.947198 | orchestrator | 2025-11-08 13:14:47.947211 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-11-08 13:14:47.991932 | orchestrator | ok: [testbed-manager] 2025-11-08 13:14:47.991984 | 
orchestrator | 2025-11-08 13:14:47.991998 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-11-08 13:14:50.014909 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-11-08 13:14:50.015029 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-11-08 13:14:50.015045 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-11-08 13:14:50.015056 | orchestrator | 2025-11-08 13:14:50.015067 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-11-08 13:14:50.699570 | orchestrator | changed: [testbed-manager] 2025-11-08 13:14:50.699674 | orchestrator | 2025-11-08 13:14:50.699691 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-11-08 13:14:51.392055 | orchestrator | changed: [testbed-manager] 2025-11-08 13:14:51.392180 | orchestrator | 2025-11-08 13:14:51.392197 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-11-08 13:14:52.116179 | orchestrator | changed: [testbed-manager] 2025-11-08 13:14:52.116294 | orchestrator | 2025-11-08 13:14:52.116306 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-11-08 13:14:52.185588 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-11-08 13:14:52.185618 | orchestrator | 2025-11-08 13:14:52.185627 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-11-08 13:14:52.225201 | orchestrator | ok: [testbed-manager] 2025-11-08 13:14:52.225219 | orchestrator | 2025-11-08 13:14:52.225227 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-11-08 13:14:52.935033 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-11-08 13:14:52.935123 | orchestrator | 2025-11-08 13:14:52.935133 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-11-08 13:14:53.012015 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-11-08 13:14:53.012050 | orchestrator | 2025-11-08 13:14:53.012059 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-11-08 13:14:53.698195 | orchestrator | changed: [testbed-manager] 2025-11-08 13:14:53.698302 | orchestrator | 2025-11-08 13:14:53.698314 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-11-08 13:14:54.263091 | orchestrator | ok: [testbed-manager] 2025-11-08 13:14:54.263210 | orchestrator | 2025-11-08 13:14:54.263223 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-11-08 13:14:54.317958 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:14:54.318070 | orchestrator | 2025-11-08 13:14:54.318084 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-11-08 13:14:54.375461 | orchestrator | ok: [testbed-manager] 2025-11-08 13:14:54.375530 | orchestrator | 2025-11-08 13:14:54.375541 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-11-08 13:14:55.185049 | orchestrator | changed: [testbed-manager] 2025-11-08 
13:14:55.185166 | orchestrator | 2025-11-08 13:14:55.185181 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-11-08 13:15:59.000355 | orchestrator | changed: [testbed-manager] 2025-11-08 13:15:59.000479 | orchestrator | 2025-11-08 13:15:59.000497 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-11-08 13:15:59.990222 | orchestrator | ok: [testbed-manager] 2025-11-08 13:15:59.990318 | orchestrator | 2025-11-08 13:15:59.990335 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2025-11-08 13:16:00.089228 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:16:00.089330 | orchestrator | 2025-11-08 13:16:00.089348 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-11-08 13:16:10.499331 | orchestrator | changed: [testbed-manager] 2025-11-08 13:16:10.499445 | orchestrator | 2025-11-08 13:16:10.499464 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-11-08 13:16:10.546338 | orchestrator | ok: [testbed-manager] 2025-11-08 13:16:10.546402 | orchestrator | 2025-11-08 13:16:10.546420 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-11-08 13:16:10.546433 | orchestrator | 2025-11-08 13:16:10.546445 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-11-08 13:16:10.598323 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:16:10.598390 | orchestrator | 2025-11-08 13:16:10.598405 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-11-08 13:17:10.650161 | orchestrator | Pausing for 60 seconds 2025-11-08 13:17:10.650293 | orchestrator | changed: [testbed-manager] 2025-11-08 13:17:10.650314 | orchestrator | 2025-11-08 13:17:10.650328 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-11-08 13:17:15.229127 | orchestrator | changed: [testbed-manager] 2025-11-08 13:17:15.229249 | orchestrator | 2025-11-08 13:17:15.229266 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-11-08 13:18:17.247077 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-11-08 13:18:17.247200 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2025-11-08 13:18:17.247216 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 
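The handler above polls the manager service until Docker reports it healthy; the FAILED - RETRYING lines are that polling. A minimal bash sketch of the same pattern, modelled on the wait_for_container_healthy calls traced further down in this log (the 5-second sleep between attempts is an assumption for illustration):

    # Hedged sketch: poll a container's Docker healthcheck until it reports
    # "healthy" or the attempt budget is exhausted.
    wait_for_container_healthy() {
        local max_attempts=$1
        local name=$2
        local attempt_num=1
        until [ "$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)" = "healthy" ]; do
            if [ "$attempt_num" -ge "$max_attempts" ]; then
                echo "$name did not become healthy in time" >&2
                return 1
            fi
            attempt_num=$((attempt_num + 1))
            sleep 5
        done
    }

    # e.g. wait_for_container_healthy 60 osism-ansible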
2025-11-08 13:18:17.247259 | orchestrator | changed: [testbed-manager] 2025-11-08 13:18:17.247274 | orchestrator | 2025-11-08 13:18:17.247286 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-11-08 13:18:27.402840 | orchestrator | changed: [testbed-manager] 2025-11-08 13:18:27.403023 | orchestrator | 2025-11-08 13:18:27.403042 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-11-08 13:18:27.471237 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-11-08 13:18:27.471291 | orchestrator | 2025-11-08 13:18:27.471305 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-11-08 13:18:27.471316 | orchestrator | 2025-11-08 13:18:27.471344 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-11-08 13:18:27.526331 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:18:27.526355 | orchestrator | 2025-11-08 13:18:27.526367 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2025-11-08 13:18:27.592330 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2025-11-08 13:18:27.592384 | orchestrator | 2025-11-08 13:18:27.592398 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2025-11-08 13:18:28.387825 | orchestrator | changed: [testbed-manager] 2025-11-08 13:18:28.387993 | orchestrator | 2025-11-08 13:18:28.388013 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2025-11-08 13:18:31.951714 | orchestrator | ok: [testbed-manager] 2025-11-08 13:18:31.951831 | orchestrator | 2025-11-08 13:18:31.951897 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2025-11-08 13:18:32.026786 | orchestrator | ok: [testbed-manager] => { 2025-11-08 13:18:32.026881 | orchestrator | "version_check_result.stdout_lines": [ 2025-11-08 13:18:32.026896 | orchestrator | "=== OSISM Container Version Check ===", 2025-11-08 13:18:32.026907 | orchestrator | "Checking running containers against expected versions...", 2025-11-08 13:18:32.026918 | orchestrator | "", 2025-11-08 13:18:32.026929 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2025-11-08 13:18:32.026940 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2025-11-08 13:18:32.026950 | orchestrator | " Enabled: true", 2025-11-08 13:18:32.026960 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2025-11-08 13:18:32.026969 | orchestrator | " Status: ✅ MATCH", 2025-11-08 13:18:32.026980 | orchestrator | "", 2025-11-08 13:18:32.026990 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2025-11-08 13:18:32.027000 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2025-11-08 13:18:32.027009 | orchestrator | " Enabled: true", 2025-11-08 13:18:32.027019 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest", 2025-11-08 13:18:32.027029 | orchestrator | " Status: ✅ MATCH", 2025-11-08 13:18:32.027039 | orchestrator | "", 2025-11-08 13:18:32.027048 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes 
Service)", 2025-11-08 13:18:32.027058 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2025-11-08 13:18:32.027068 | orchestrator | " Enabled: true", 2025-11-08 13:18:32.027077 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2025-11-08 13:18:32.027087 | orchestrator | " Status: ✅ MATCH", 2025-11-08 13:18:32.027097 | orchestrator | "", 2025-11-08 13:18:32.027107 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2025-11-08 13:18:32.027117 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2025-11-08 13:18:32.027127 | orchestrator | " Enabled: true", 2025-11-08 13:18:32.027137 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2025-11-08 13:18:32.027147 | orchestrator | " Status: ✅ MATCH", 2025-11-08 13:18:32.027157 | orchestrator | "", 2025-11-08 13:18:32.027166 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2025-11-08 13:18:32.027200 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2", 2025-11-08 13:18:32.027210 | orchestrator | " Enabled: true", 2025-11-08 13:18:32.027220 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2", 2025-11-08 13:18:32.027230 | orchestrator | " Status: ✅ MATCH", 2025-11-08 13:18:32.027239 | orchestrator | "", 2025-11-08 13:18:32.027249 | orchestrator | "Checking service: osismclient (OSISM Client)", 2025-11-08 13:18:32.027259 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2025-11-08 13:18:32.027269 | orchestrator | " Enabled: true", 2025-11-08 13:18:32.027278 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2025-11-08 13:18:32.027288 | orchestrator | " Status: ✅ MATCH", 2025-11-08 13:18:32.027298 | orchestrator | "", 2025-11-08 13:18:32.027307 | orchestrator | "Checking service: ara-server (ARA Server)", 2025-11-08 13:18:32.027317 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2025-11-08 13:18:32.027327 | orchestrator | " Enabled: true", 2025-11-08 13:18:32.027336 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2025-11-08 13:18:32.027346 | orchestrator | " Status: ✅ MATCH", 2025-11-08 13:18:32.027356 | orchestrator | "", 2025-11-08 13:18:32.027376 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2025-11-08 13:18:32.027387 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.3", 2025-11-08 13:18:32.027399 | orchestrator | " Enabled: true", 2025-11-08 13:18:32.027410 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.3", 2025-11-08 13:18:32.027421 | orchestrator | " Status: ✅ MATCH", 2025-11-08 13:18:32.027432 | orchestrator | "", 2025-11-08 13:18:32.027443 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2025-11-08 13:18:32.027454 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest", 2025-11-08 13:18:32.027471 | orchestrator | " Enabled: true", 2025-11-08 13:18:32.027482 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest", 2025-11-08 13:18:32.027494 | orchestrator | " Status: ✅ MATCH", 2025-11-08 13:18:32.027505 | orchestrator | "", 2025-11-08 13:18:32.027516 | orchestrator | "Checking service: redis (Redis Cache)", 2025-11-08 13:18:32.027527 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.5-alpine", 2025-11-08 13:18:32.027538 | orchestrator | " Enabled: true", 2025-11-08 13:18:32.027549 | orchestrator | 
" Running: registry.osism.tech/dockerhub/library/redis:7.4.5-alpine", 2025-11-08 13:18:32.027560 | orchestrator | " Status: ✅ MATCH", 2025-11-08 13:18:32.027571 | orchestrator | "", 2025-11-08 13:18:32.027582 | orchestrator | "Checking service: api (OSISM API Service)", 2025-11-08 13:18:32.027593 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2025-11-08 13:18:32.027604 | orchestrator | " Enabled: true", 2025-11-08 13:18:32.027615 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2025-11-08 13:18:32.027626 | orchestrator | " Status: ✅ MATCH", 2025-11-08 13:18:32.027637 | orchestrator | "", 2025-11-08 13:18:32.027648 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2025-11-08 13:18:32.027659 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2025-11-08 13:18:32.027670 | orchestrator | " Enabled: true", 2025-11-08 13:18:32.027681 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2025-11-08 13:18:32.027692 | orchestrator | " Status: ✅ MATCH", 2025-11-08 13:18:32.027704 | orchestrator | "", 2025-11-08 13:18:32.027714 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2025-11-08 13:18:32.027724 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2025-11-08 13:18:32.027734 | orchestrator | " Enabled: true", 2025-11-08 13:18:32.027743 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2025-11-08 13:18:32.027753 | orchestrator | " Status: ✅ MATCH", 2025-11-08 13:18:32.027763 | orchestrator | "", 2025-11-08 13:18:32.027773 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2025-11-08 13:18:32.027782 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2025-11-08 13:18:32.027792 | orchestrator | " Enabled: true", 2025-11-08 13:18:32.027808 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2025-11-08 13:18:32.027818 | orchestrator | " Status: ✅ MATCH", 2025-11-08 13:18:32.027828 | orchestrator | "", 2025-11-08 13:18:32.027837 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2025-11-08 13:18:32.027878 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2025-11-08 13:18:32.027889 | orchestrator | " Enabled: true", 2025-11-08 13:18:32.027899 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2025-11-08 13:18:32.027909 | orchestrator | " Status: ✅ MATCH", 2025-11-08 13:18:32.027918 | orchestrator | "", 2025-11-08 13:18:32.027928 | orchestrator | "=== Summary ===", 2025-11-08 13:18:32.027938 | orchestrator | "Errors (version mismatches): 0", 2025-11-08 13:18:32.027947 | orchestrator | "Warnings (expected containers not running): 0", 2025-11-08 13:18:32.027957 | orchestrator | "", 2025-11-08 13:18:32.027967 | orchestrator | "✅ All running containers match expected versions!" 
2025-11-08 13:18:32.027977 | orchestrator | ] 2025-11-08 13:18:32.027987 | orchestrator | } 2025-11-08 13:18:32.027997 | orchestrator | 2025-11-08 13:18:32.028007 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2025-11-08 13:18:32.089450 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:18:32.089514 | orchestrator | 2025-11-08 13:18:32.089529 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 13:18:32.089543 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-11-08 13:18:32.089555 | orchestrator | 2025-11-08 13:18:32.204406 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-11-08 13:18:32.204533 | orchestrator | + deactivate 2025-11-08 13:18:32.204547 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-11-08 13:18:32.204556 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-11-08 13:18:32.204564 | orchestrator | + export PATH 2025-11-08 13:18:32.204571 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-11-08 13:18:32.204579 | orchestrator | + '[' -n '' ']' 2025-11-08 13:18:32.204586 | orchestrator | + hash -r 2025-11-08 13:18:32.204657 | orchestrator | + '[' -n '' ']' 2025-11-08 13:18:32.204667 | orchestrator | + unset VIRTUAL_ENV 2025-11-08 13:18:32.204674 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-11-08 13:18:32.204681 | orchestrator | + '[' '!' '' = nondestructive ']' 2025-11-08 13:18:32.204688 | orchestrator | + unset -f deactivate 2025-11-08 13:18:32.204696 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-11-08 13:18:32.212055 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-11-08 13:18:32.212097 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-11-08 13:18:32.212106 | orchestrator | + local max_attempts=60 2025-11-08 13:18:32.212115 | orchestrator | + local name=ceph-ansible 2025-11-08 13:18:32.212122 | orchestrator | + local attempt_num=1 2025-11-08 13:18:32.214255 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-08 13:18:32.241261 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-11-08 13:18:32.241328 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-11-08 13:18:32.241343 | orchestrator | + local max_attempts=60 2025-11-08 13:18:32.241356 | orchestrator | + local name=kolla-ansible 2025-11-08 13:18:32.241368 | orchestrator | + local attempt_num=1 2025-11-08 13:18:32.242138 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-11-08 13:18:32.271271 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-11-08 13:18:32.271315 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-11-08 13:18:32.271323 | orchestrator | + local max_attempts=60 2025-11-08 13:18:32.271331 | orchestrator | + local name=osism-ansible 2025-11-08 13:18:32.271338 | orchestrator | + local attempt_num=1 2025-11-08 13:18:32.272410 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-11-08 13:18:32.305522 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-11-08 13:18:32.305577 | orchestrator | + [[ true == \t\r\u\e ]] 2025-11-08 13:18:32.305591 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-11-08 
13:18:33.060351 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-11-08 13:18:33.263232 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-11-08 13:18:33.263363 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2025-11-08 13:18:33.263377 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2025-11-08 13:18:33.263389 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2025-11-08 13:18:33.263402 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2025-11-08 13:18:33.263414 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2025-11-08 13:18:33.263440 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2025-11-08 13:18:33.263452 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2025-11-08 13:18:33.263463 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2025-11-08 13:18:33.263474 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.3 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2025-11-08 13:18:33.263485 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 2025-11-08 13:18:33.263496 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2025-11-08 13:18:33.263506 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2025-11-08 13:18:33.263517 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2025-11-08 13:18:33.263528 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2025-11-08 13:18:33.263539 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2025-11-08 13:18:33.270630 | orchestrator | ++ semver latest 7.0.0 2025-11-08 13:18:33.313542 | orchestrator | + [[ -1 -ge 0 ]] 2025-11-08 13:18:33.313590 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-11-08 13:18:33.313602 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-11-08 13:18:33.317787 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-11-08 13:18:45.453064 | orchestrator | 2025-11-08 13:18:45 | INFO  | Task c660a346-5130-4728-ac99-23e99aa759ef (resolvconf) was prepared for execution. 
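Note: the wait_for_container_healthy calls traced above poll each manager container's Docker healthcheck via docker inspect until it reports "healthy", before the compose stack is listed. A minimal bash sketch of such a helper, assuming a fixed poll interval (the real helper lives in the testbed deploy scripts and may differ):

    wait_for_container_healthy() {
        # Poll the container's healthcheck state until it is "healthy".
        local max_attempts=$1
        local name=$2
        local attempt_num=1
        until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
            if (( attempt_num >= max_attempts )); then
                echo "Container ${name} did not become healthy in time" >&2
                return 1
            fi
            attempt_num=$(( attempt_num + 1 ))
            sleep 5   # poll interval is an assumption, not visible in the trace
        done
    }

    # Usage mirroring the traced calls:
    # wait_for_container_healthy 60 ceph-ansible
    # wait_for_container_healthy 60 kolla-ansible
    # wait_for_container_healthy 60 osism-ansible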
2025-11-08 13:18:45.453174 | orchestrator | 2025-11-08 13:18:45 | INFO  | It takes a moment until task c660a346-5130-4728-ac99-23e99aa759ef (resolvconf) has been started and output is visible here. 2025-11-08 13:18:59.550358 | orchestrator | 2025-11-08 13:18:59.550480 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-11-08 13:18:59.550499 | orchestrator | 2025-11-08 13:18:59.550511 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-11-08 13:18:59.550523 | orchestrator | Saturday 08 November 2025 13:18:49 +0000 (0:00:00.123) 0:00:00.123 ***** 2025-11-08 13:18:59.550534 | orchestrator | ok: [testbed-manager] 2025-11-08 13:18:59.550548 | orchestrator | 2025-11-08 13:18:59.550560 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-11-08 13:18:59.550573 | orchestrator | Saturday 08 November 2025 13:18:52 +0000 (0:00:03.669) 0:00:03.792 ***** 2025-11-08 13:18:59.550584 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:18:59.550597 | orchestrator | 2025-11-08 13:18:59.550608 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-11-08 13:18:59.550620 | orchestrator | Saturday 08 November 2025 13:18:52 +0000 (0:00:00.063) 0:00:03.856 ***** 2025-11-08 13:18:59.550632 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-11-08 13:18:59.550645 | orchestrator | 2025-11-08 13:18:59.550668 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-11-08 13:18:59.550680 | orchestrator | Saturday 08 November 2025 13:18:52 +0000 (0:00:00.092) 0:00:03.948 ***** 2025-11-08 13:18:59.550691 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-11-08 13:18:59.550703 | orchestrator | 2025-11-08 13:18:59.550714 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-11-08 13:18:59.550725 | orchestrator | Saturday 08 November 2025 13:18:52 +0000 (0:00:00.076) 0:00:04.025 ***** 2025-11-08 13:18:59.550737 | orchestrator | ok: [testbed-manager] 2025-11-08 13:18:59.550748 | orchestrator | 2025-11-08 13:18:59.550759 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-11-08 13:18:59.550771 | orchestrator | Saturday 08 November 2025 13:18:54 +0000 (0:00:01.109) 0:00:05.134 ***** 2025-11-08 13:18:59.550782 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:18:59.550794 | orchestrator | 2025-11-08 13:18:59.550805 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-11-08 13:18:59.550816 | orchestrator | Saturday 08 November 2025 13:18:54 +0000 (0:00:00.062) 0:00:05.197 ***** 2025-11-08 13:18:59.550828 | orchestrator | ok: [testbed-manager] 2025-11-08 13:18:59.550839 | orchestrator | 2025-11-08 13:18:59.550879 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-11-08 13:18:59.550893 | orchestrator | Saturday 08 November 2025 13:18:55 +0000 (0:00:01.515) 0:00:06.713 ***** 2025-11-08 13:18:59.550906 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:18:59.550919 | orchestrator | 2025-11-08 13:18:59.550932 | orchestrator | TASK 
[osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-11-08 13:18:59.550946 | orchestrator | Saturday 08 November 2025 13:18:55 +0000 (0:00:00.070) 0:00:06.783 ***** 2025-11-08 13:18:59.551009 | orchestrator | changed: [testbed-manager] 2025-11-08 13:18:59.551021 | orchestrator | 2025-11-08 13:18:59.551033 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-11-08 13:18:59.551045 | orchestrator | Saturday 08 November 2025 13:18:56 +0000 (0:00:00.579) 0:00:07.362 ***** 2025-11-08 13:18:59.551058 | orchestrator | changed: [testbed-manager] 2025-11-08 13:18:59.551070 | orchestrator | 2025-11-08 13:18:59.551081 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-11-08 13:18:59.551092 | orchestrator | Saturday 08 November 2025 13:18:57 +0000 (0:00:01.072) 0:00:08.435 ***** 2025-11-08 13:18:59.551103 | orchestrator | ok: [testbed-manager] 2025-11-08 13:18:59.551114 | orchestrator | 2025-11-08 13:18:59.551125 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-11-08 13:18:59.551162 | orchestrator | Saturday 08 November 2025 13:18:58 +0000 (0:00:00.897) 0:00:09.332 ***** 2025-11-08 13:18:59.551174 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-11-08 13:18:59.551184 | orchestrator | 2025-11-08 13:18:59.551195 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-11-08 13:18:59.551206 | orchestrator | Saturday 08 November 2025 13:18:58 +0000 (0:00:00.078) 0:00:09.411 ***** 2025-11-08 13:18:59.551217 | orchestrator | changed: [testbed-manager] 2025-11-08 13:18:59.551228 | orchestrator | 2025-11-08 13:18:59.551238 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 13:18:59.551250 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-11-08 13:18:59.551262 | orchestrator | 2025-11-08 13:18:59.551273 | orchestrator | 2025-11-08 13:18:59.551284 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 13:18:59.551294 | orchestrator | Saturday 08 November 2025 13:18:59 +0000 (0:00:01.014) 0:00:10.426 ***** 2025-11-08 13:18:59.551305 | orchestrator | =============================================================================== 2025-11-08 13:18:59.551316 | orchestrator | Gathering Facts --------------------------------------------------------- 3.67s 2025-11-08 13:18:59.551327 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 1.52s 2025-11-08 13:18:59.551337 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.11s 2025-11-08 13:18:59.551348 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.07s 2025-11-08 13:18:59.551359 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.01s 2025-11-08 13:18:59.551370 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.90s 2025-11-08 13:18:59.551401 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.58s 2025-11-08 13:18:59.551413 | orchestrator | osism.commons.resolvconf : 
Include resolvconf tasks --------------------- 0.09s 2025-11-08 13:18:59.551424 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2025-11-08 13:18:59.551435 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s 2025-11-08 13:18:59.551452 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.07s 2025-11-08 13:18:59.551463 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2025-11-08 13:18:59.551475 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2025-11-08 13:18:59.741184 | orchestrator | + osism apply sshconfig 2025-11-08 13:19:11.539598 | orchestrator | 2025-11-08 13:19:11 | INFO  | Task e60173e2-9e73-49af-ba88-23600444ad96 (sshconfig) was prepared for execution. 2025-11-08 13:19:11.539777 | orchestrator | 2025-11-08 13:19:11 | INFO  | It takes a moment until task e60173e2-9e73-49af-ba88-23600444ad96 (sshconfig) has been started and output is visible here. 2025-11-08 13:19:23.167936 | orchestrator | 2025-11-08 13:19:23.233989 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-11-08 13:19:23.234086 | orchestrator | 2025-11-08 13:19:23.234100 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-11-08 13:19:23.234112 | orchestrator | Saturday 08 November 2025 13:19:15 +0000 (0:00:00.158) 0:00:00.158 ***** 2025-11-08 13:19:23.234123 | orchestrator | ok: [testbed-manager] 2025-11-08 13:19:23.234137 | orchestrator | 2025-11-08 13:19:23.234148 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-11-08 13:19:23.234161 | orchestrator | Saturday 08 November 2025 13:19:16 +0000 (0:00:00.574) 0:00:00.733 ***** 2025-11-08 13:19:23.234172 | orchestrator | changed: [testbed-manager] 2025-11-08 13:19:23.234185 | orchestrator | 2025-11-08 13:19:23.234196 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-11-08 13:19:23.234234 | orchestrator | Saturday 08 November 2025 13:19:16 +0000 (0:00:00.539) 0:00:01.273 ***** 2025-11-08 13:19:23.234246 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-11-08 13:19:23.234258 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-11-08 13:19:23.234269 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-11-08 13:19:23.234281 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-11-08 13:19:23.234292 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-11-08 13:19:23.234303 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-11-08 13:19:23.234314 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-11-08 13:19:23.234325 | orchestrator | 2025-11-08 13:19:23.234336 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-11-08 13:19:23.234348 | orchestrator | Saturday 08 November 2025 13:19:22 +0000 (0:00:05.577) 0:00:06.851 ***** 2025-11-08 13:19:23.234359 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:19:23.234370 | orchestrator | 2025-11-08 13:19:23.234381 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-11-08 13:19:23.234392 | orchestrator | Saturday 08 November 2025 
13:19:22 +0000 (0:00:00.068) 0:00:06.919 ***** 2025-11-08 13:19:23.234403 | orchestrator | changed: [testbed-manager] 2025-11-08 13:19:23.234415 | orchestrator | 2025-11-08 13:19:23.234426 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 13:19:23.234439 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-08 13:19:23.234454 | orchestrator | 2025-11-08 13:19:23.234465 | orchestrator | 2025-11-08 13:19:23.234477 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 13:19:23.234488 | orchestrator | Saturday 08 November 2025 13:19:22 +0000 (0:00:00.583) 0:00:07.502 ***** 2025-11-08 13:19:23.234500 | orchestrator | =============================================================================== 2025-11-08 13:19:23.234512 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.58s 2025-11-08 13:19:23.234523 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.58s 2025-11-08 13:19:23.234534 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.57s 2025-11-08 13:19:23.234546 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.54s 2025-11-08 13:19:23.234558 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2025-11-08 13:19:23.443167 | orchestrator | + osism apply known-hosts 2025-11-08 13:19:35.517186 | orchestrator | 2025-11-08 13:19:35 | INFO  | Task 7d3efdde-6699-4130-95da-ea639a015d97 (known-hosts) was prepared for execution. 2025-11-08 13:19:35.517268 | orchestrator | 2025-11-08 13:19:35 | INFO  | It takes a moment until task 7d3efdde-6699-4130-95da-ea639a015d97 (known-hosts) has been started and output is visible here. 
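Note: the sshconfig play that just finished writes one config fragment per inventory host under ~/.ssh/config.d and then assembles them into a single ~/.ssh/config. A rough bash equivalent, assuming the operator user "dragon" and the host names from the recap above (the actual work is done by the osism.commons.sshconfig Ansible role, not by a script like this):

    confdir="$HOME/.ssh/config.d"
    mkdir -p "$confdir"
    chmod 700 "$HOME/.ssh" "$confdir"

    # one fragment per host; the Host options are illustrative assumptions
    for host in testbed-manager testbed-node-{0..5}; do
        printf 'Host %s\n    User dragon\n' "$host" > "$confdir/$host"
    done

    # assemble all fragments into the final ssh config
    cat "$confdir"/* > "$HOME/.ssh/config"
    chmod 600 "$HOME/.ssh/config"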
2025-11-08 13:19:52.010417 | orchestrator | 2025-11-08 13:19:52.010467 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-11-08 13:19:52.010475 | orchestrator | 2025-11-08 13:19:52.010480 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-11-08 13:19:52.010487 | orchestrator | Saturday 08 November 2025 13:19:39 +0000 (0:00:00.162) 0:00:00.162 ***** 2025-11-08 13:19:52.010493 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-11-08 13:19:52.010498 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-11-08 13:19:52.010503 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-11-08 13:19:52.010508 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-11-08 13:19:52.010513 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-11-08 13:19:52.010518 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-11-08 13:19:52.010534 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-11-08 13:19:52.010539 | orchestrator | 2025-11-08 13:19:52.010550 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-11-08 13:19:52.010556 | orchestrator | Saturday 08 November 2025 13:19:45 +0000 (0:00:05.982) 0:00:06.145 ***** 2025-11-08 13:19:52.010562 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-11-08 13:19:52.010569 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-11-08 13:19:52.010574 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-11-08 13:19:52.010579 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-11-08 13:19:52.010584 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-11-08 13:19:52.010589 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-11-08 13:19:52.010594 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-11-08 13:19:52.010598 | orchestrator | 2025-11-08 13:19:52.010603 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-08 13:19:52.010608 | orchestrator | Saturday 08 November 2025 13:19:45 +0000 (0:00:00.145) 0:00:06.291 ***** 2025-11-08 13:19:52.010614 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKXcTzIiuTHKDyzdIih0eBTmgCrYoZMr+b97XBK6qMC8TpI+3ImPNY7enNMgdJx8NTLYRlN7MPy62SU27/cJhSc=) 2025-11-08 13:19:52.010620 | 
orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDlz//OIvVqA9uqf3MdWAztxVQJyJnoP2fUBGyrrV9v+) 2025-11-08 13:19:52.010629 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCuH3urCeW+26a3xt/IIOZk9ngt9IwEBrxUTWqt1ZlFjI3uJWnVm2k3oCMlM39T2s8Rvtt7SfvG52me9Vl1lq0M40EBcdvrXrMkp8U840el4yUyw0/vEd2rWk2hlbMfLU4DmtYoIaYRAZ/2B2v6fndDS0krLKQIjMWNC5MDI/73j33s4fB+6qf00B8Zk8/1hG0c3O8IQhj/cT6ilP8HLWFajy4YIsElHP3Avz+N7SsLHrcZnRpp5axFg2NLdwCc/9G4UhvLweyIxzeLsavSUyuBC8F8MvPNOzeWTgdT78vZXHlbDvilutohaatKi8GGobfE8Fy8v2cLZ3ecYPb0WnYlun3WfPvMMTiWaiiBNjKSei01CAN8PThTW6vGg20OrGvW01JJc/c8I3wZngaWbf1BsWl5jov72fmd3y8TDcV5sNpR1NTTk73o2TafIQ2xP8rRydHiutqm1SGDXU+Raff1LjnvoMRf36D8aRzNSLb+K66zuihU8xFI9up3yqCgAZ0=) 2025-11-08 13:19:52.010636 | orchestrator | 2025-11-08 13:19:52.010641 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-08 13:19:52.010646 | orchestrator | Saturday 08 November 2025 13:19:46 +0000 (0:00:01.179) 0:00:07.471 ***** 2025-11-08 13:19:52.010651 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKPsTCxU2fkcHJnHE2SEgSMmQLdtxEzsHv1wcBQgbH4SSzUzB/HJr3OHsO+1d4mjZWwfZKmqF+97haF9tFE9Gw0=) 2025-11-08 13:19:52.010656 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA0W9l0d6TwDL0xQk6n5vyecG+8mKeubfkhMc38yDKkt) 2025-11-08 13:19:52.010675 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/GiE6nXIcTlY4jeek5L0Ky1/6OiRTEa5Erelip0sES36d12lcWkvCRsi7RoXpRV6Qf0XIoHRjbXPEBbSiNCmJRiUxDGs5Q+XANneeKS/yJ3nDpb0eNyJm+CLqNvAQBo0BTPL7qWH4txF/ymMC7iR2zk1Holo2mAvJgtDwWZDx1TLOWhTem2Ip569qQHR93UHVWSmQR+tiUC7Si+OSZInMM13FT17NBG9eX/EYNhrpe7GEe8t6QEGS3ZqmTVtZ9F1Tuuk0ejcb6kWFglL5eFNMdkIC9XH1Cj8r6h9LcWIcVaOyvqdm9LZPoN4SzgwrHD3PXYUNEwvvsiXmfLHrnkiUQQRLccC4Uw87ueqPEI4332fi8AEDsjQ39Ms3EHq++iQQKWeig9JXDKDEeS+C5pm4PkAlYKo+nfx9WEz3IdWWRENx/+FHlN96GGTv+BReqdvRu+H8yZt9htwPRdeoluTzcXhB+nLaEpmXjfOtYfGE5eO1tWRYkGK8dYbBbOwA7Kk=) 2025-11-08 13:19:52.010686 | orchestrator | 2025-11-08 13:19:52.010691 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-08 13:19:52.010696 | orchestrator | Saturday 08 November 2025 13:19:47 +0000 (0:00:01.040) 0:00:08.511 ***** 2025-11-08 13:19:52.010701 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDLLrWqBD2w3rLkguqVi1chD3PQ/5MhmovAT2iARxMHdUJa5dEwBbcMBZOBDMBQDy9eBCzRPIPqm9irrYq/qZHc=) 2025-11-08 13:19:52.010706 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEkOYIOrzHDgDNqeA0U5t1KojKbwoCJVBFSNhBSdNTVG) 2025-11-08 13:19:52.010752 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCgAakCsV+XOecvfyusNhHO1GDCIWa5ExtPjQcsB/Xc89vZYEMlL2e6JzwYyY0Z4CYFQE53o6ZFOS3kyE6QmX7vKpFWKAmddWlOaTzrrJC8uvjGC+hssWuTrwLZGIMr/PW+rdwh7irwX1idG63LAhoz8GuMWbMTb5cORZ1ECXZB0JOefn2jB+oTNOL4fGUqZhORwdXUG/y38KMDmJ6Cy8wAwoNRisWBcv5awRttnGaE1+7DLRpfGckOGj7haqh5zZ/arCx3Ey1CWgoH4JRmeBqidEn5wsXza3RKQ8CgRlCINulg5GAasE26EmrGkDtPqN45ftUWY2jTIQPwwskrWnacHuyrC5tr5Qy9VruvZc0sDy/kmFLqWus6naoeWXlJCxicfJnVIIy38U1hoh8kiv9m0tAdojZMWaPQMRbK9G6U7GQowqFZuxPvCYOsxSHe6KVi35pEO7+uN5uRfmbPTD7yySe7nuVMpzShtXuh8J/GRvBTZwtIVWQEiRWeZHqJE7M=) 2025-11-08 
13:19:52.010757 | orchestrator | 2025-11-08 13:19:52.010761 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-08 13:19:52.010766 | orchestrator | Saturday 08 November 2025 13:19:48 +0000 (0:00:01.068) 0:00:09.580 ***** 2025-11-08 13:19:52.010771 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCji6UU+oXZxpNj7C/8Fv9nnJny/MUAOPDGyo0bu0A6GhVfRUCBFx2+97Qxj3Sy1j2YFardNUqw9dinnNBeK0PlYt0pjLC9oLfhrjHjnJ2tlnkyl9yTNY7sAcF8BJ4UTkOIIarLUMgGUkCu7YwqMvqLTee94N6pB39bkngE1sfJgY15V54u+2HxcJvT6Jfa1g1+mgFvC3B8LWA1cGZy/lZDCNFih6u3j6N+0D84avKi0JKNJkDgbuTAE4VXEL0ZCyuuydcnvV6NeO3tYNf7Id8gB3FaT/cwuHIk4Sop2xTCmiuu4wBVLS2mJByU505DcmKwr4FrdzpeiUqEzAuYipIPDWX7lXj+v+hy2m6oQ2x7AWUjzFWaZEDgsSFHO/LE9lIs3YNK+2GvLDpfaxJA/KiTm9zygG3J8tjukc15Rh9zZlBI72KYGQtd6I35u5IXnuDscLPwnp6XMTcynF4YRAECXXA8ggQU+PrX1qpq6Fw8xvHFsFmErH6B4oboFefEIrs=) 2025-11-08 13:19:52.010776 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMSY0RndBSYdMKRr7jqpVcHQ4OAuHFDZ1/dEgiF+M1hqP+91niSjO/vVEXwtPvGDYi6NLb7rK/AxjYMOkE8VYRU=) 2025-11-08 13:19:52.010781 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBtmuPpLR+xxEsWZKwqoywXJ7fO3pYABEuS1s2Vz1mh/) 2025-11-08 13:19:52.010786 | orchestrator | 2025-11-08 13:19:52.010791 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-08 13:19:52.010796 | orchestrator | Saturday 08 November 2025 13:19:49 +0000 (0:00:01.033) 0:00:10.613 ***** 2025-11-08 13:19:52.010800 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO1VQM0i8HgZtbxWQZiOXq901mVWMI6pbTp5xashQhB0h7WiAP/DPsTGp7QtZPFsCmOES3ky3rm7ehQpqDU5p6E=) 2025-11-08 13:19:52.010805 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDJNripDiwJbxb79PxoyhtEKa0hY+1x896zlldJ2cCOgWkCz0KKFhYXfGggAUcJMo9In+LCRd+mxG4J7nIqSH6pwzGcXRFEY+xl9qAt+RqOMfV+nQhoSxa27Vbnxc7PcazOo/NYkb9vKjiKJ9uMddcSXVTuB/zCh8EdhT+apzJ4j7w5zQ1djBhFFD07Rb0NTej6Rp1Aspv6f1o5kW0hYYvTIEXBTLTUSa73/auePbikuGSOMSz5Nu6YuPIsx596I4YzlyJ7k5fu6H3YpIK/H8zPATK9d/KAcVwYNWn7Jd9LG81csvLFWodzcwbNrtwtXZh/y6yzAhG8zmHSwRa8heCi21Ejw9Eq58A+KU8dEM79TeCGeP4/R5hiVLSJeULbFuQQLbVu41ZCjETMwjacd1FAfepcclp76P+gVyq1TQKEsZrhT5e4l89hBmRjXoDCCHV4VtSDv5nKyqw2+3do4Jkrg4a1YLIk5NtfChACk3uD70wdIyhWsT7+Zi9apTjSO7c=) 2025-11-08 13:19:52.010815 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIL/mkELh3hSyLCEixc8LrOlGBml1ZniHrIg0eBDsXkD) 2025-11-08 13:19:52.010820 | orchestrator | 2025-11-08 13:19:52.010824 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-08 13:19:52.010829 | orchestrator | Saturday 08 November 2025 13:19:50 +0000 (0:00:01.009) 0:00:11.623 ***** 2025-11-08 13:19:52.010837 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC+oHK5p4HOjGr9NijiYjSANysJ4peX2uGKOdGG3fBtZkzC1Y62eSvUgcNqWcr6nqxMFzOrJYn3qKc8oMdf3cYM=) 2025-11-08 13:20:02.030068 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCnUBW31rSqwyZPeVjiBJmiAgNoO3p0mh4lH+Zl77P9z7nn6NixFFfHkyxEo/zVbUUSlhYb7wv5RpMbIWhZVafzrwSAbfI9wU/VjPCs2oiaSncVkUuRRC5CvpYYiIyxCXnV9O3ONMmoT4sD6wHVTiiqI5WK727Tb2Xm7RYjeTLP+CYA4ayuaNNhsqod4TzRt3c3lSTbv0T3hqv6aPsQUxitXCut6Gp/tf6T+y8Pqlq4ibogXqyCKWHFJ0DU8jdzqjKFFg73epRGRpfcTFcOv8FBTCMGhO/1gnL0/h2QGCJW1sa8sBMAiCWHMaTqMiuHtpxGyhDukkX0qRh387chxVp2CVRzZjtDi1X3o9PpPfM1AhEFK2lL+bX8y2+f3VtERccDnC1QWmqD8nhRifFx4DGbaY5iHQ/ALbJvH/xjB0YKVFi8KFYmFj63zCE81H40SyuQZtp5mNF5JgAd39UmTsz9prRUaM17cPNrPOSiLdtJu32z+bTguOCPGM1i389t9sk=) 2025-11-08 13:20:02.030153 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOUAF4cDRGl2iNeLJXpYYdmtbgAvRW14y4aYAvOqfUDp) 2025-11-08 13:20:02.030168 | orchestrator | 2025-11-08 13:20:02.030181 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-08 13:20:02.030193 | orchestrator | Saturday 08 November 2025 13:19:51 +0000 (0:00:01.006) 0:00:12.630 ***** 2025-11-08 13:20:02.030203 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPExBrM1YgK1o6aKGeVzUVgBpbKBc+DU7gFyOcFR6AZv) 2025-11-08 13:20:02.030214 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDEIOZwVc/YYV4ZEaPRmZB4zjccjas9J3caqeix/FaQPjwhr0JtLm3hiZOSPfcntAVXTS6z7mhpNZhtNLMo5xFMeUULz3NFFO97K1cq5lwyg4Y1PVDmRoBqj7YgRRGyId3ef89gSPB/YvEwvbGwp+BJwqRGCnImL+7Zl5AeyDW6bj4wdjNv7nUOfGg6Z3Faib3mjEy8fxVat2RwECeZV2dn1hIMgdJBXppowkUZpH+vMIg4O9Dqtf2Oa1agyDuZUHGle5b8fhCidkr+mzo13Ew4rVEUYQsbZj1xfUwcjSpgL0qgZJ+dp3ugdReeWvGPe/zaJh3OuJhFnZDlNQcwBgL8XiGspm8WeVfqOY1FLY/UXGiRxswZ3sgqKvtXWJJ/0tKdJIUccXo/N+P9RDf7bWMSqqVYajVcBdtK3N51/ngP1msZvUfvJLkp7D3czb2YqzOOuOK0NS/0CaI96QGzDAZ9c5DFD0oYX8ngY4brodsdhfG/cpCUhVuPdX5xpOmM24c=) 2025-11-08 13:20:02.030226 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM87nelA5cS9mfX7T9YeXFfAU1BIjWyTizxJZrhE6ibxOcSVyaMGGcUf02QB0tTBR4C11dSRhbdCMnANEU2m8UE=) 2025-11-08 13:20:02.030237 | orchestrator | 2025-11-08 13:20:02.030247 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-11-08 13:20:02.030258 | orchestrator | Saturday 08 November 2025 13:19:53 +0000 (0:00:01.042) 0:00:13.672 ***** 2025-11-08 13:20:02.030268 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-11-08 13:20:02.030279 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-11-08 13:20:02.030289 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-11-08 13:20:02.030299 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-11-08 13:20:02.030310 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-11-08 13:20:02.030332 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-11-08 13:20:02.030347 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-11-08 13:20:02.030357 | orchestrator | 2025-11-08 13:20:02.030367 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-11-08 13:20:02.030378 | orchestrator | Saturday 08 November 2025 13:19:57 +0000 (0:00:04.938) 0:00:18.610 ***** 2025-11-08 13:20:02.030407 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of 
testbed-node-0) 2025-11-08 13:20:02.030420 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-11-08 13:20:02.030429 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-11-08 13:20:02.030439 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-11-08 13:20:02.030449 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-11-08 13:20:02.030459 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-11-08 13:20:02.030469 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-11-08 13:20:02.030479 | orchestrator | 2025-11-08 13:20:02.030502 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-08 13:20:02.030512 | orchestrator | Saturday 08 November 2025 13:19:58 +0000 (0:00:00.144) 0:00:18.755 ***** 2025-11-08 13:20:02.030522 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKXcTzIiuTHKDyzdIih0eBTmgCrYoZMr+b97XBK6qMC8TpI+3ImPNY7enNMgdJx8NTLYRlN7MPy62SU27/cJhSc=) 2025-11-08 13:20:02.030535 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCuH3urCeW+26a3xt/IIOZk9ngt9IwEBrxUTWqt1ZlFjI3uJWnVm2k3oCMlM39T2s8Rvtt7SfvG52me9Vl1lq0M40EBcdvrXrMkp8U840el4yUyw0/vEd2rWk2hlbMfLU4DmtYoIaYRAZ/2B2v6fndDS0krLKQIjMWNC5MDI/73j33s4fB+6qf00B8Zk8/1hG0c3O8IQhj/cT6ilP8HLWFajy4YIsElHP3Avz+N7SsLHrcZnRpp5axFg2NLdwCc/9G4UhvLweyIxzeLsavSUyuBC8F8MvPNOzeWTgdT78vZXHlbDvilutohaatKi8GGobfE8Fy8v2cLZ3ecYPb0WnYlun3WfPvMMTiWaiiBNjKSei01CAN8PThTW6vGg20OrGvW01JJc/c8I3wZngaWbf1BsWl5jov72fmd3y8TDcV5sNpR1NTTk73o2TafIQ2xP8rRydHiutqm1SGDXU+Raff1LjnvoMRf36D8aRzNSLb+K66zuihU8xFI9up3yqCgAZ0=) 2025-11-08 13:20:02.030545 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDlz//OIvVqA9uqf3MdWAztxVQJyJnoP2fUBGyrrV9v+) 2025-11-08 13:20:02.030555 | orchestrator | 2025-11-08 13:20:02.030565 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-08 13:20:02.030575 | orchestrator | Saturday 08 November 2025 13:19:59 +0000 (0:00:00.957) 0:00:19.712 ***** 2025-11-08 13:20:02.030586 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC/GiE6nXIcTlY4jeek5L0Ky1/6OiRTEa5Erelip0sES36d12lcWkvCRsi7RoXpRV6Qf0XIoHRjbXPEBbSiNCmJRiUxDGs5Q+XANneeKS/yJ3nDpb0eNyJm+CLqNvAQBo0BTPL7qWH4txF/ymMC7iR2zk1Holo2mAvJgtDwWZDx1TLOWhTem2Ip569qQHR93UHVWSmQR+tiUC7Si+OSZInMM13FT17NBG9eX/EYNhrpe7GEe8t6QEGS3ZqmTVtZ9F1Tuuk0ejcb6kWFglL5eFNMdkIC9XH1Cj8r6h9LcWIcVaOyvqdm9LZPoN4SzgwrHD3PXYUNEwvvsiXmfLHrnkiUQQRLccC4Uw87ueqPEI4332fi8AEDsjQ39Ms3EHq++iQQKWeig9JXDKDEeS+C5pm4PkAlYKo+nfx9WEz3IdWWRENx/+FHlN96GGTv+BReqdvRu+H8yZt9htwPRdeoluTzcXhB+nLaEpmXjfOtYfGE5eO1tWRYkGK8dYbBbOwA7Kk=) 2025-11-08 13:20:02.030597 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKPsTCxU2fkcHJnHE2SEgSMmQLdtxEzsHv1wcBQgbH4SSzUzB/HJr3OHsO+1d4mjZWwfZKmqF+97haF9tFE9Gw0=) 2025-11-08 13:20:02.030610 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA0W9l0d6TwDL0xQk6n5vyecG+8mKeubfkhMc38yDKkt) 2025-11-08 13:20:02.030629 | orchestrator | 2025-11-08 13:20:02.030641 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-08 13:20:02.030652 | orchestrator | Saturday 08 November 2025 13:20:00 +0000 (0:00:00.968) 0:00:20.681 ***** 2025-11-08 13:20:02.030663 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCgAakCsV+XOecvfyusNhHO1GDCIWa5ExtPjQcsB/Xc89vZYEMlL2e6JzwYyY0Z4CYFQE53o6ZFOS3kyE6QmX7vKpFWKAmddWlOaTzrrJC8uvjGC+hssWuTrwLZGIMr/PW+rdwh7irwX1idG63LAhoz8GuMWbMTb5cORZ1ECXZB0JOefn2jB+oTNOL4fGUqZhORwdXUG/y38KMDmJ6Cy8wAwoNRisWBcv5awRttnGaE1+7DLRpfGckOGj7haqh5zZ/arCx3Ey1CWgoH4JRmeBqidEn5wsXza3RKQ8CgRlCINulg5GAasE26EmrGkDtPqN45ftUWY2jTIQPwwskrWnacHuyrC5tr5Qy9VruvZc0sDy/kmFLqWus6naoeWXlJCxicfJnVIIy38U1hoh8kiv9m0tAdojZMWaPQMRbK9G6U7GQowqFZuxPvCYOsxSHe6KVi35pEO7+uN5uRfmbPTD7yySe7nuVMpzShtXuh8J/GRvBTZwtIVWQEiRWeZHqJE7M=) 2025-11-08 13:20:02.030675 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDLLrWqBD2w3rLkguqVi1chD3PQ/5MhmovAT2iARxMHdUJa5dEwBbcMBZOBDMBQDy9eBCzRPIPqm9irrYq/qZHc=) 2025-11-08 13:20:02.030687 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEkOYIOrzHDgDNqeA0U5t1KojKbwoCJVBFSNhBSdNTVG) 2025-11-08 13:20:02.030698 | orchestrator | 2025-11-08 13:20:02.030709 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-08 13:20:02.030721 | orchestrator | Saturday 08 November 2025 13:20:01 +0000 (0:00:01.005) 0:00:21.687 ***** 2025-11-08 13:20:02.030732 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBtmuPpLR+xxEsWZKwqoywXJ7fO3pYABEuS1s2Vz1mh/) 2025-11-08 13:20:02.030763 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCji6UU+oXZxpNj7C/8Fv9nnJny/MUAOPDGyo0bu0A6GhVfRUCBFx2+97Qxj3Sy1j2YFardNUqw9dinnNBeK0PlYt0pjLC9oLfhrjHjnJ2tlnkyl9yTNY7sAcF8BJ4UTkOIIarLUMgGUkCu7YwqMvqLTee94N6pB39bkngE1sfJgY15V54u+2HxcJvT6Jfa1g1+mgFvC3B8LWA1cGZy/lZDCNFih6u3j6N+0D84avKi0JKNJkDgbuTAE4VXEL0ZCyuuydcnvV6NeO3tYNf7Id8gB3FaT/cwuHIk4Sop2xTCmiuu4wBVLS2mJByU505DcmKwr4FrdzpeiUqEzAuYipIPDWX7lXj+v+hy2m6oQ2x7AWUjzFWaZEDgsSFHO/LE9lIs3YNK+2GvLDpfaxJA/KiTm9zygG3J8tjukc15Rh9zZlBI72KYGQtd6I35u5IXnuDscLPwnp6XMTcynF4YRAECXXA8ggQU+PrX1qpq6Fw8xvHFsFmErH6B4oboFefEIrs=) 2025-11-08 13:20:06.295625 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMSY0RndBSYdMKRr7jqpVcHQ4OAuHFDZ1/dEgiF+M1hqP+91niSjO/vVEXwtPvGDYi6NLb7rK/AxjYMOkE8VYRU=) 2025-11-08 13:20:06.295713 | orchestrator | 2025-11-08 13:20:06.295729 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-08 13:20:06.295743 | orchestrator | Saturday 08 November 2025 13:20:02 +0000 (0:00:00.966) 0:00:22.653 ***** 2025-11-08 13:20:06.295757 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDJNripDiwJbxb79PxoyhtEKa0hY+1x896zlldJ2cCOgWkCz0KKFhYXfGggAUcJMo9In+LCRd+mxG4J7nIqSH6pwzGcXRFEY+xl9qAt+RqOMfV+nQhoSxa27Vbnxc7PcazOo/NYkb9vKjiKJ9uMddcSXVTuB/zCh8EdhT+apzJ4j7w5zQ1djBhFFD07Rb0NTej6Rp1Aspv6f1o5kW0hYYvTIEXBTLTUSa73/auePbikuGSOMSz5Nu6YuPIsx596I4YzlyJ7k5fu6H3YpIK/H8zPATK9d/KAcVwYNWn7Jd9LG81csvLFWodzcwbNrtwtXZh/y6yzAhG8zmHSwRa8heCi21Ejw9Eq58A+KU8dEM79TeCGeP4/R5hiVLSJeULbFuQQLbVu41ZCjETMwjacd1FAfepcclp76P+gVyq1TQKEsZrhT5e4l89hBmRjXoDCCHV4VtSDv5nKyqw2+3do4Jkrg4a1YLIk5NtfChACk3uD70wdIyhWsT7+Zi9apTjSO7c=) 2025-11-08 13:20:06.295771 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO1VQM0i8HgZtbxWQZiOXq901mVWMI6pbTp5xashQhB0h7WiAP/DPsTGp7QtZPFsCmOES3ky3rm7ehQpqDU5p6E=) 2025-11-08 13:20:06.295782 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIL/mkELh3hSyLCEixc8LrOlGBml1ZniHrIg0eBDsXkD) 2025-11-08 13:20:06.295794 | orchestrator | 2025-11-08 13:20:06.295805 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-08 13:20:06.295841 | orchestrator | Saturday 08 November 2025 13:20:03 +0000 (0:00:01.068) 0:00:23.722 ***** 2025-11-08 13:20:06.295853 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCnUBW31rSqwyZPeVjiBJmiAgNoO3p0mh4lH+Zl77P9z7nn6NixFFfHkyxEo/zVbUUSlhYb7wv5RpMbIWhZVafzrwSAbfI9wU/VjPCs2oiaSncVkUuRRC5CvpYYiIyxCXnV9O3ONMmoT4sD6wHVTiiqI5WK727Tb2Xm7RYjeTLP+CYA4ayuaNNhsqod4TzRt3c3lSTbv0T3hqv6aPsQUxitXCut6Gp/tf6T+y8Pqlq4ibogXqyCKWHFJ0DU8jdzqjKFFg73epRGRpfcTFcOv8FBTCMGhO/1gnL0/h2QGCJW1sa8sBMAiCWHMaTqMiuHtpxGyhDukkX0qRh387chxVp2CVRzZjtDi1X3o9PpPfM1AhEFK2lL+bX8y2+f3VtERccDnC1QWmqD8nhRifFx4DGbaY5iHQ/ALbJvH/xjB0YKVFi8KFYmFj63zCE81H40SyuQZtp5mNF5JgAd39UmTsz9prRUaM17cPNrPOSiLdtJu32z+bTguOCPGM1i389t9sk=) 2025-11-08 13:20:06.295908 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC+oHK5p4HOjGr9NijiYjSANysJ4peX2uGKOdGG3fBtZkzC1Y62eSvUgcNqWcr6nqxMFzOrJYn3qKc8oMdf3cYM=) 2025-11-08 13:20:06.295920 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOUAF4cDRGl2iNeLJXpYYdmtbgAvRW14y4aYAvOqfUDp) 2025-11-08 13:20:06.295931 | orchestrator | 2025-11-08 13:20:06.295942 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-08 13:20:06.295953 | orchestrator | Saturday 08 November 2025 13:20:04 +0000 (0:00:00.994) 0:00:24.717 ***** 2025-11-08 13:20:06.295964 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM87nelA5cS9mfX7T9YeXFfAU1BIjWyTizxJZrhE6ibxOcSVyaMGGcUf02QB0tTBR4C11dSRhbdCMnANEU2m8UE=) 2025-11-08 13:20:06.295976 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDEIOZwVc/YYV4ZEaPRmZB4zjccjas9J3caqeix/FaQPjwhr0JtLm3hiZOSPfcntAVXTS6z7mhpNZhtNLMo5xFMeUULz3NFFO97K1cq5lwyg4Y1PVDmRoBqj7YgRRGyId3ef89gSPB/YvEwvbGwp+BJwqRGCnImL+7Zl5AeyDW6bj4wdjNv7nUOfGg6Z3Faib3mjEy8fxVat2RwECeZV2dn1hIMgdJBXppowkUZpH+vMIg4O9Dqtf2Oa1agyDuZUHGle5b8fhCidkr+mzo13Ew4rVEUYQsbZj1xfUwcjSpgL0qgZJ+dp3ugdReeWvGPe/zaJh3OuJhFnZDlNQcwBgL8XiGspm8WeVfqOY1FLY/UXGiRxswZ3sgqKvtXWJJ/0tKdJIUccXo/N+P9RDf7bWMSqqVYajVcBdtK3N51/ngP1msZvUfvJLkp7D3czb2YqzOOuOK0NS/0CaI96QGzDAZ9c5DFD0oYX8ngY4brodsdhfG/cpCUhVuPdX5xpOmM24c=) 2025-11-08 13:20:06.295987 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPExBrM1YgK1o6aKGeVzUVgBpbKBc+DU7gFyOcFR6AZv) 2025-11-08 13:20:06.295998 | orchestrator | 2025-11-08 13:20:06.296009 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-11-08 13:20:06.296020 | orchestrator | Saturday 08 November 2025 13:20:05 +0000 (0:00:01.035) 0:00:25.753 ***** 2025-11-08 13:20:06.296032 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-11-08 13:20:06.296043 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-11-08 13:20:06.296054 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-11-08 13:20:06.296065 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-11-08 13:20:06.296092 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-11-08 13:20:06.296103 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-11-08 13:20:06.296114 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-11-08 13:20:06.296125 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:20:06.296137 | orchestrator | 2025-11-08 13:20:06.296148 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-11-08 13:20:06.296159 | orchestrator | Saturday 08 November 2025 13:20:05 +0000 (0:00:00.160) 0:00:25.913 ***** 2025-11-08 13:20:06.296170 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:20:06.296181 | orchestrator | 2025-11-08 13:20:06.296195 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-11-08 13:20:06.296208 | orchestrator | Saturday 08 November 2025 13:20:05 +0000 (0:00:00.056) 0:00:25.969 ***** 2025-11-08 13:20:06.296238 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:20:06.296251 | orchestrator | 2025-11-08 13:20:06.296263 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-11-08 13:20:06.296275 | orchestrator | Saturday 08 November 2025 13:20:05 +0000 (0:00:00.064) 0:00:26.034 ***** 2025-11-08 13:20:06.296288 | orchestrator | changed: [testbed-manager] 2025-11-08 13:20:06.296300 | orchestrator | 2025-11-08 13:20:06.296313 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 13:20:06.296326 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-11-08 13:20:06.296340 | orchestrator | 2025-11-08 13:20:06.296352 | orchestrator | 2025-11-08 13:20:06.296364 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 13:20:06.296377 | orchestrator | Saturday 08 November 2025 13:20:06 +0000 (0:00:00.694) 0:00:26.728 ***** 2025-11-08 13:20:06.296390 | orchestrator | 
=============================================================================== 2025-11-08 13:20:06.296403 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.98s 2025-11-08 13:20:06.296416 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 4.94s 2025-11-08 13:20:06.296429 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2025-11-08 13:20:06.296441 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-11-08 13:20:06.296454 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-11-08 13:20:06.296466 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-11-08 13:20:06.296479 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-11-08 13:20:06.296491 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-11-08 13:20:06.296504 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-11-08 13:20:06.296516 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2025-11-08 13:20:06.296529 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2025-11-08 13:20:06.296542 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2025-11-08 13:20:06.296560 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2025-11-08 13:20:06.296571 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.97s 2025-11-08 13:20:06.296582 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.97s 2025-11-08 13:20:06.296593 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.96s 2025-11-08 13:20:06.296604 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.69s 2025-11-08 13:20:06.296615 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s 2025-11-08 13:20:06.296625 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.15s 2025-11-08 13:20:06.296637 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.14s 2025-11-08 13:20:06.551754 | orchestrator | + osism apply squid 2025-11-08 13:20:18.449393 | orchestrator | 2025-11-08 13:20:18 | INFO  | Task f47047cc-0706-49eb-be71-91e1fb88ffd8 (squid) was prepared for execution. 2025-11-08 13:20:18.449517 | orchestrator | 2025-11-08 13:20:18 | INFO  | It takes a moment until task f47047cc-0706-49eb-be71-91e1fb88ffd8 (squid) has been started and output is visible here. 
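Note: the known_hosts play above runs ssh-keyscan for every host twice, once by hostname and once by ansible_host (the 192.168.16.x addresses), and records the rsa, ecdsa and ed25519 keys it finds. A condensed bash sketch of the same idea (the role itself does this through Ansible tasks and also fixes the file permissions at the end):

    known_hosts="$HOME/.ssh/known_hosts"

    # names and addresses taken from the play output above
    for target in testbed-manager testbed-node-{0..5} 192.168.16.5 192.168.16.{10..15}; do
        ssh-keyscan -t rsa,ecdsa,ed25519 "$target" 2>/dev/null >> "$known_hosts"
    done

    chmod 600 "$known_hosts"   # mirrors the "Set file permissions" task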
2025-11-08 13:22:21.072732 | orchestrator | 2025-11-08 13:22:21.072847 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-11-08 13:22:21.072895 | orchestrator | 2025-11-08 13:22:21.072908 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-11-08 13:22:21.072921 | orchestrator | Saturday 08 November 2025 13:20:22 +0000 (0:00:00.174) 0:00:00.174 ***** 2025-11-08 13:22:21.072961 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-11-08 13:22:21.072973 | orchestrator | 2025-11-08 13:22:21.072986 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-11-08 13:22:21.072997 | orchestrator | Saturday 08 November 2025 13:20:22 +0000 (0:00:00.080) 0:00:00.254 ***** 2025-11-08 13:22:21.073008 | orchestrator | ok: [testbed-manager] 2025-11-08 13:22:21.073020 | orchestrator | 2025-11-08 13:22:21.073032 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-11-08 13:22:21.073043 | orchestrator | Saturday 08 November 2025 13:20:25 +0000 (0:00:02.428) 0:00:02.683 ***** 2025-11-08 13:22:21.073054 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-11-08 13:22:21.073065 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-11-08 13:22:21.073077 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-11-08 13:22:21.073088 | orchestrator | 2025-11-08 13:22:21.073099 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-11-08 13:22:21.073110 | orchestrator | Saturday 08 November 2025 13:20:26 +0000 (0:00:01.161) 0:00:03.844 ***** 2025-11-08 13:22:21.073121 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-11-08 13:22:21.073132 | orchestrator | 2025-11-08 13:22:21.073143 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-11-08 13:22:21.073153 | orchestrator | Saturday 08 November 2025 13:20:27 +0000 (0:00:01.124) 0:00:04.969 ***** 2025-11-08 13:22:21.073164 | orchestrator | ok: [testbed-manager] 2025-11-08 13:22:21.073175 | orchestrator | 2025-11-08 13:22:21.073186 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-11-08 13:22:21.073197 | orchestrator | Saturday 08 November 2025 13:20:27 +0000 (0:00:00.394) 0:00:05.363 ***** 2025-11-08 13:22:21.073208 | orchestrator | changed: [testbed-manager] 2025-11-08 13:22:21.073219 | orchestrator | 2025-11-08 13:22:21.073230 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-11-08 13:22:21.073241 | orchestrator | Saturday 08 November 2025 13:20:28 +0000 (0:00:00.894) 0:00:06.257 ***** 2025-11-08 13:22:21.073252 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-11-08 13:22:21.073264 | orchestrator | ok: [testbed-manager] 2025-11-08 13:22:21.073276 | orchestrator | 2025-11-08 13:22:21.073288 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-11-08 13:22:21.073301 | orchestrator | Saturday 08 November 2025 13:21:07 +0000 (0:00:38.925) 0:00:45.183 ***** 2025-11-08 13:22:21.073313 | orchestrator | changed: [testbed-manager] 2025-11-08 13:22:21.073326 | orchestrator | 2025-11-08 13:22:21.073338 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-11-08 13:22:21.073351 | orchestrator | Saturday 08 November 2025 13:21:19 +0000 (0:00:12.234) 0:00:57.418 ***** 2025-11-08 13:22:21.073364 | orchestrator | Pausing for 60 seconds 2025-11-08 13:22:21.073376 | orchestrator | changed: [testbed-manager] 2025-11-08 13:22:21.073389 | orchestrator | 2025-11-08 13:22:21.073402 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-11-08 13:22:21.073415 | orchestrator | Saturday 08 November 2025 13:22:19 +0000 (0:01:00.089) 0:01:57.507 ***** 2025-11-08 13:22:21.073427 | orchestrator | ok: [testbed-manager] 2025-11-08 13:22:21.073440 | orchestrator | 2025-11-08 13:22:21.073452 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-11-08 13:22:21.073465 | orchestrator | Saturday 08 November 2025 13:22:19 +0000 (0:00:00.075) 0:01:57.582 ***** 2025-11-08 13:22:21.073478 | orchestrator | changed: [testbed-manager] 2025-11-08 13:22:21.073490 | orchestrator | 2025-11-08 13:22:21.073503 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 13:22:21.073516 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 13:22:21.073538 | orchestrator | 2025-11-08 13:22:21.073550 | orchestrator | 2025-11-08 13:22:21.073562 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 13:22:21.073575 | orchestrator | Saturday 08 November 2025 13:22:20 +0000 (0:00:00.719) 0:01:58.302 ***** 2025-11-08 13:22:21.073588 | orchestrator | =============================================================================== 2025-11-08 13:22:21.073600 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s 2025-11-08 13:22:21.073612 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 38.93s 2025-11-08 13:22:21.073625 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.23s 2025-11-08 13:22:21.073637 | orchestrator | osism.services.squid : Install required packages ------------------------ 2.43s 2025-11-08 13:22:21.073648 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.16s 2025-11-08 13:22:21.073659 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.12s 2025-11-08 13:22:21.073670 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.89s 2025-11-08 13:22:21.073681 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.72s 2025-11-08 13:22:21.073692 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.39s 2025-11-08 13:22:21.073703 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 
0.08s 2025-11-08 13:22:21.073714 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.08s 2025-11-08 13:22:21.431426 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-11-08 13:22:21.432405 | orchestrator | ++ semver latest 9.0.0 2025-11-08 13:22:21.492060 | orchestrator | + [[ -1 -lt 0 ]] 2025-11-08 13:22:21.492087 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-11-08 13:22:21.492912 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-11-08 13:22:33.530637 | orchestrator | 2025-11-08 13:22:33 | INFO  | Task ed8b0fb6-1774-498e-91ce-ec8e24c54fbd (operator) was prepared for execution. 2025-11-08 13:22:33.530684 | orchestrator | 2025-11-08 13:22:33 | INFO  | It takes a moment until task ed8b0fb6-1774-498e-91ce-ec8e24c54fbd (operator) has been started and output is visible here. 2025-11-08 13:22:49.913816 | orchestrator | 2025-11-08 13:22:49.913968 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-11-08 13:22:49.913987 | orchestrator | 2025-11-08 13:22:49.913999 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-11-08 13:22:49.914011 | orchestrator | Saturday 08 November 2025 13:22:37 +0000 (0:00:00.154) 0:00:00.154 ***** 2025-11-08 13:22:49.914080 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:22:49.914094 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:22:49.914105 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:22:49.914116 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:22:49.914127 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:22:49.914138 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:22:49.914150 | orchestrator | 2025-11-08 13:22:49.914161 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-11-08 13:22:49.914172 | orchestrator | Saturday 08 November 2025 13:22:41 +0000 (0:00:03.142) 0:00:03.296 ***** 2025-11-08 13:22:49.914183 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:22:49.914194 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:22:49.914206 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:22:49.914216 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:22:49.914227 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:22:49.914238 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:22:49.914252 | orchestrator | 2025-11-08 13:22:49.914263 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-11-08 13:22:49.914274 | orchestrator | 2025-11-08 13:22:49.914285 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-11-08 13:22:49.914297 | orchestrator | Saturday 08 November 2025 13:22:41 +0000 (0:00:00.829) 0:00:04.126 ***** 2025-11-08 13:22:49.914307 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:22:49.914343 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:22:49.914354 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:22:49.914365 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:22:49.914376 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:22:49.914387 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:22:49.914398 | orchestrator | 2025-11-08 13:22:49.914409 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-11-08 13:22:49.914420 | orchestrator | Saturday 08 November 2025 13:22:42 +0000 (0:00:00.221) 0:00:04.347 ***** 2025-11-08 13:22:49.914431 | orchestrator | ok: 
[testbed-node-0] 2025-11-08 13:22:49.914442 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:22:49.914452 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:22:49.914463 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:22:49.914474 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:22:49.914485 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:22:49.914496 | orchestrator | 2025-11-08 13:22:49.914507 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-11-08 13:22:49.914518 | orchestrator | Saturday 08 November 2025 13:22:42 +0000 (0:00:00.214) 0:00:04.562 ***** 2025-11-08 13:22:49.914529 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:22:49.914556 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:22:49.914568 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:22:49.914579 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:22:49.914590 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:22:49.914601 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:22:49.914612 | orchestrator | 2025-11-08 13:22:49.914623 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-11-08 13:22:49.914634 | orchestrator | Saturday 08 November 2025 13:22:42 +0000 (0:00:00.645) 0:00:05.208 ***** 2025-11-08 13:22:49.914645 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:22:49.914656 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:22:49.914667 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:22:49.914678 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:22:49.914689 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:22:49.914700 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:22:49.914711 | orchestrator | 2025-11-08 13:22:49.914722 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-11-08 13:22:49.914733 | orchestrator | Saturday 08 November 2025 13:22:43 +0000 (0:00:00.860) 0:00:06.068 ***** 2025-11-08 13:22:49.914744 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-11-08 13:22:49.914754 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-11-08 13:22:49.914770 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-11-08 13:22:49.914782 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-11-08 13:22:49.914793 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-11-08 13:22:49.914803 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-11-08 13:22:49.914814 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-11-08 13:22:49.914825 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-11-08 13:22:49.914836 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-11-08 13:22:49.914847 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-11-08 13:22:49.914858 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-11-08 13:22:49.914902 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-11-08 13:22:49.914914 | orchestrator | 2025-11-08 13:22:49.914925 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-11-08 13:22:49.914936 | orchestrator | Saturday 08 November 2025 13:22:45 +0000 (0:00:01.188) 0:00:07.257 ***** 2025-11-08 13:22:49.914946 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:22:49.914957 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:22:49.914967 | orchestrator | changed: [testbed-node-4] 
2025-11-08 13:22:49.914978 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:22:49.914988 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:22:49.914999 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:22:49.915018 | orchestrator | 2025-11-08 13:22:49.915029 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-11-08 13:22:49.915041 | orchestrator | Saturday 08 November 2025 13:22:46 +0000 (0:00:01.360) 0:00:08.618 ***** 2025-11-08 13:22:49.915052 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-11-08 13:22:49.915063 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To 2025-11-08 13:22:49.915074 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-11-08 13:22:49.915084 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-11-08 13:22:49.915115 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-11-08 13:22:49.915126 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-11-08 13:22:49.915137 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-11-08 13:22:49.915148 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-11-08 13:22:49.915158 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-11-08 13:22:49.915169 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-11-08 13:22:49.915179 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-11-08 13:22:49.915190 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-11-08 13:22:49.915201 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-11-08 13:22:49.915211 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-11-08 13:22:49.915222 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-11-08 13:22:49.915232 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-11-08 13:22:49.915243 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-11-08 13:22:49.915254 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-11-08 13:22:49.915264 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-11-08 13:22:49.915275 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-11-08 13:22:49.915285 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-11-08 13:22:49.915296 | orchestrator | 2025-11-08 13:22:49.915307 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-11-08 13:22:49.915318 | orchestrator | Saturday 08 November 2025 13:22:47 +0000 (0:00:01.269) 0:00:09.887 ***** 2025-11-08 13:22:49.915329 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:22:49.915339 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:22:49.915350 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:22:49.915361 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:22:49.915371 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:22:49.915382 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:22:49.915392 | orchestrator | 2025-11-08 13:22:49.915403 | orchestrator | TASK [osism.commons.operator : Set 
custom PS1 prompt in .bashrc configuration file] *** 2025-11-08 13:22:49.915414 | orchestrator | Saturday 08 November 2025 13:22:47 +0000 (0:00:00.168) 0:00:10.055 ***** 2025-11-08 13:22:49.915424 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:22:49.915435 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:22:49.915446 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:22:49.915456 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:22:49.915467 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:22:49.915477 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:22:49.915488 | orchestrator | 2025-11-08 13:22:49.915499 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-11-08 13:22:49.915510 | orchestrator | Saturday 08 November 2025 13:22:48 +0000 (0:00:00.188) 0:00:10.244 ***** 2025-11-08 13:22:49.915520 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:22:49.915531 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:22:49.915549 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:22:49.915560 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:22:49.915570 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:22:49.915581 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:22:49.915592 | orchestrator | 2025-11-08 13:22:49.915603 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-11-08 13:22:49.915613 | orchestrator | Saturday 08 November 2025 13:22:48 +0000 (0:00:00.593) 0:00:10.838 ***** 2025-11-08 13:22:49.915624 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:22:49.915635 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:22:49.915646 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:22:49.915656 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:22:49.915667 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:22:49.915677 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:22:49.915688 | orchestrator | 2025-11-08 13:22:49.915698 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-11-08 13:22:49.915709 | orchestrator | Saturday 08 November 2025 13:22:48 +0000 (0:00:00.198) 0:00:11.037 ***** 2025-11-08 13:22:49.915720 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-11-08 13:22:49.915731 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:22:49.915742 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-11-08 13:22:49.915752 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:22:49.915763 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-11-08 13:22:49.915774 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-11-08 13:22:49.915784 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:22:49.915795 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:22:49.915806 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-11-08 13:22:49.915816 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-11-08 13:22:49.915827 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:22:49.915838 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:22:49.915848 | orchestrator | 2025-11-08 13:22:49.915859 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-11-08 13:22:49.915922 | orchestrator | Saturday 08 November 2025 13:22:49 +0000 (0:00:00.738) 0:00:11.775 ***** 2025-11-08 13:22:49.915934 | 
orchestrator | skipping: [testbed-node-0] 2025-11-08 13:22:49.915945 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:22:49.915956 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:22:49.915966 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:22:49.915977 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:22:49.915988 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:22:49.915998 | orchestrator | 2025-11-08 13:22:49.916009 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-11-08 13:22:49.916020 | orchestrator | Saturday 08 November 2025 13:22:49 +0000 (0:00:00.198) 0:00:11.974 ***** 2025-11-08 13:22:49.916031 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:22:49.916042 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:22:49.916053 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:22:49.916064 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:22:49.916083 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:22:51.328089 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:22:51.328184 | orchestrator | 2025-11-08 13:22:51.328199 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-11-08 13:22:51.328212 | orchestrator | Saturday 08 November 2025 13:22:49 +0000 (0:00:00.156) 0:00:12.130 ***** 2025-11-08 13:22:51.328223 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:22:51.328234 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:22:51.328245 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:22:51.328256 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:22:51.328267 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:22:51.328278 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:22:51.328290 | orchestrator | 2025-11-08 13:22:51.328301 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-11-08 13:22:51.328337 | orchestrator | Saturday 08 November 2025 13:22:50 +0000 (0:00:00.166) 0:00:12.297 ***** 2025-11-08 13:22:51.328349 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:22:51.328360 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:22:51.328370 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:22:51.328381 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:22:51.328392 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:22:51.328402 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:22:51.328413 | orchestrator | 2025-11-08 13:22:51.328424 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-11-08 13:22:51.328435 | orchestrator | Saturday 08 November 2025 13:22:50 +0000 (0:00:00.686) 0:00:12.983 ***** 2025-11-08 13:22:51.328446 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:22:51.328457 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:22:51.328467 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:22:51.328478 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:22:51.328488 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:22:51.328499 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:22:51.328510 | orchestrator | 2025-11-08 13:22:51.328521 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 13:22:51.328533 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-11-08 
13:22:51.328545 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-11-08 13:22:51.328573 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-11-08 13:22:51.328584 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-11-08 13:22:51.328596 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-11-08 13:22:51.328606 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-11-08 13:22:51.328617 | orchestrator | 2025-11-08 13:22:51.328628 | orchestrator | 2025-11-08 13:22:51.328640 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 13:22:51.328652 | orchestrator | Saturday 08 November 2025 13:22:51 +0000 (0:00:00.277) 0:00:13.261 ***** 2025-11-08 13:22:51.328665 | orchestrator | =============================================================================== 2025-11-08 13:22:51.328677 | orchestrator | Gathering Facts --------------------------------------------------------- 3.14s 2025-11-08 13:22:51.328695 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.36s 2025-11-08 13:22:51.328707 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.27s 2025-11-08 13:22:51.328720 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.19s 2025-11-08 13:22:51.328733 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.86s 2025-11-08 13:22:51.328745 | orchestrator | Do not require tty for all users ---------------------------------------- 0.83s 2025-11-08 13:22:51.328758 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.74s 2025-11-08 13:22:51.328770 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.69s 2025-11-08 13:22:51.328782 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.65s 2025-11-08 13:22:51.328794 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.59s 2025-11-08 13:22:51.328807 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.28s 2025-11-08 13:22:51.328827 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.22s 2025-11-08 13:22:51.328839 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.21s 2025-11-08 13:22:51.328851 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.20s 2025-11-08 13:22:51.328864 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.20s 2025-11-08 13:22:51.328900 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.19s 2025-11-08 13:22:51.328912 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.17s 2025-11-08 13:22:51.328925 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.17s 2025-11-08 13:22:51.328936 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.16s 2025-11-08 13:22:51.676242 | orchestrator | + osism 
apply --environment custom facts 2025-11-08 13:22:53.767247 | orchestrator | 2025-11-08 13:22:53 | INFO  | Trying to run play facts in environment custom 2025-11-08 13:23:03.869577 | orchestrator | 2025-11-08 13:23:03 | INFO  | Task 0d3f781b-b858-44a2-87d9-33bae65cf948 (facts) was prepared for execution. 2025-11-08 13:23:03.869693 | orchestrator | 2025-11-08 13:23:03 | INFO  | It takes a moment until task 0d3f781b-b858-44a2-87d9-33bae65cf948 (facts) has been started and output is visible here. 2025-11-08 13:23:49.374144 | orchestrator | 2025-11-08 13:23:49.374283 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-11-08 13:23:49.374301 | orchestrator | 2025-11-08 13:23:49.374315 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-11-08 13:23:49.374328 | orchestrator | Saturday 08 November 2025 13:23:08 +0000 (0:00:00.104) 0:00:00.104 ***** 2025-11-08 13:23:49.374340 | orchestrator | ok: [testbed-manager] 2025-11-08 13:23:49.374354 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:23:49.374367 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:23:49.374380 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:23:49.374392 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:23:49.374403 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:23:49.374414 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:23:49.374451 | orchestrator | 2025-11-08 13:23:49.374463 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-11-08 13:23:49.374474 | orchestrator | Saturday 08 November 2025 13:23:09 +0000 (0:00:01.624) 0:00:01.729 ***** 2025-11-08 13:23:49.374485 | orchestrator | ok: [testbed-manager] 2025-11-08 13:23:49.374496 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:23:49.374508 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:23:49.374519 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:23:49.374530 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:23:49.374541 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:23:49.374552 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:23:49.374563 | orchestrator | 2025-11-08 13:23:49.374574 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-11-08 13:23:49.374585 | orchestrator | 2025-11-08 13:23:49.374596 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-11-08 13:23:49.374607 | orchestrator | Saturday 08 November 2025 13:23:10 +0000 (0:00:01.235) 0:00:02.964 ***** 2025-11-08 13:23:49.374619 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:23:49.374630 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:23:49.374641 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:23:49.374653 | orchestrator | 2025-11-08 13:23:49.374664 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-11-08 13:23:49.374676 | orchestrator | Saturday 08 November 2025 13:23:11 +0000 (0:00:00.128) 0:00:03.093 ***** 2025-11-08 13:23:49.374687 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:23:49.374698 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:23:49.374709 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:23:49.374720 | orchestrator | 2025-11-08 13:23:49.374732 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-11-08 13:23:49.374792 | 
orchestrator | Saturday 08 November 2025 13:23:11 +0000 (0:00:00.245) 0:00:03.339 ***** 2025-11-08 13:23:49.374805 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:23:49.374816 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:23:49.374827 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:23:49.374838 | orchestrator | 2025-11-08 13:23:49.374849 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-11-08 13:23:49.374862 | orchestrator | Saturday 08 November 2025 13:23:11 +0000 (0:00:00.213) 0:00:03.552 ***** 2025-11-08 13:23:49.374905 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 13:23:49.374917 | orchestrator | 2025-11-08 13:23:49.374929 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-11-08 13:23:49.374955 | orchestrator | Saturday 08 November 2025 13:23:11 +0000 (0:00:00.133) 0:00:03.685 ***** 2025-11-08 13:23:49.374967 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:23:49.374978 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:23:49.374989 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:23:49.374999 | orchestrator | 2025-11-08 13:23:49.375010 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-11-08 13:23:49.375021 | orchestrator | Saturday 08 November 2025 13:23:12 +0000 (0:00:00.449) 0:00:04.135 ***** 2025-11-08 13:23:49.375033 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:23:49.375044 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:23:49.375054 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:23:49.375065 | orchestrator | 2025-11-08 13:23:49.375077 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-11-08 13:23:49.375088 | orchestrator | Saturday 08 November 2025 13:23:12 +0000 (0:00:00.146) 0:00:04.281 ***** 2025-11-08 13:23:49.375099 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:23:49.375110 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:23:49.375121 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:23:49.375132 | orchestrator | 2025-11-08 13:23:49.375143 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-11-08 13:23:49.375154 | orchestrator | Saturday 08 November 2025 13:23:13 +0000 (0:00:01.027) 0:00:05.309 ***** 2025-11-08 13:23:49.375165 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:23:49.375176 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:23:49.375187 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:23:49.375198 | orchestrator | 2025-11-08 13:23:49.375209 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-11-08 13:23:49.375220 | orchestrator | Saturday 08 November 2025 13:23:13 +0000 (0:00:00.459) 0:00:05.769 ***** 2025-11-08 13:23:49.375231 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:23:49.375242 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:23:49.375253 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:23:49.375264 | orchestrator | 2025-11-08 13:23:49.375275 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-11-08 13:23:49.375286 | orchestrator | Saturday 08 November 2025 13:23:14 +0000 (0:00:01.060) 0:00:06.829 ***** 2025-11-08 13:23:49.375297 | 
orchestrator | changed: [testbed-node-5] 2025-11-08 13:23:49.375308 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:23:49.375319 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:23:49.375330 | orchestrator | 2025-11-08 13:23:49.375341 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-11-08 13:23:49.375352 | orchestrator | Saturday 08 November 2025 13:23:33 +0000 (0:00:18.703) 0:00:25.532 ***** 2025-11-08 13:23:49.375362 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:23:49.375374 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:23:49.375385 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:23:49.375396 | orchestrator | 2025-11-08 13:23:49.375407 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-11-08 13:23:49.375444 | orchestrator | Saturday 08 November 2025 13:23:33 +0000 (0:00:00.117) 0:00:25.650 ***** 2025-11-08 13:23:49.375456 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:23:49.375467 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:23:49.375479 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:23:49.375489 | orchestrator | 2025-11-08 13:23:49.375500 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-11-08 13:23:49.375511 | orchestrator | Saturday 08 November 2025 13:23:40 +0000 (0:00:06.943) 0:00:32.593 ***** 2025-11-08 13:23:49.375522 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:23:49.375534 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:23:49.375545 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:23:49.375556 | orchestrator | 2025-11-08 13:23:49.375567 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-11-08 13:23:49.375578 | orchestrator | Saturday 08 November 2025 13:23:41 +0000 (0:00:00.455) 0:00:33.049 ***** 2025-11-08 13:23:49.375589 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-11-08 13:23:49.375600 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-11-08 13:23:49.375611 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-11-08 13:23:49.375622 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-11-08 13:23:49.375633 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-11-08 13:23:49.375643 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-11-08 13:23:49.375654 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-11-08 13:23:49.375665 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-11-08 13:23:49.375676 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-11-08 13:23:49.375687 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-11-08 13:23:49.375697 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-11-08 13:23:49.375708 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-11-08 13:23:49.375719 | orchestrator | 2025-11-08 13:23:49.375730 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-11-08 13:23:49.375741 | orchestrator | Saturday 08 November 2025 13:23:44 +0000 (0:00:03.407) 0:00:36.456 ***** 2025-11-08 13:23:49.375752 | orchestrator | ok: [testbed-node-4] 
2025-11-08 13:23:49.375763 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:23:49.375774 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:23:49.375784 | orchestrator | 2025-11-08 13:23:49.375795 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-11-08 13:23:49.375806 | orchestrator | 2025-11-08 13:23:49.375817 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-11-08 13:23:49.375828 | orchestrator | Saturday 08 November 2025 13:23:45 +0000 (0:00:01.293) 0:00:37.750 ***** 2025-11-08 13:23:49.375839 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:23:49.375850 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:23:49.375861 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:23:49.375970 | orchestrator | ok: [testbed-manager] 2025-11-08 13:23:49.375984 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:23:49.375995 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:23:49.376006 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:23:49.376016 | orchestrator | 2025-11-08 13:23:49.376028 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 13:23:49.376082 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 13:23:49.376097 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 13:23:49.376110 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 13:23:49.376130 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 13:23:49.376141 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 13:23:49.376153 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 13:23:49.376164 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 13:23:49.376175 | orchestrator | 2025-11-08 13:23:49.376187 | orchestrator | 2025-11-08 13:23:49.376198 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 13:23:49.376209 | orchestrator | Saturday 08 November 2025 13:23:49 +0000 (0:00:03.600) 0:00:41.350 ***** 2025-11-08 13:23:49.376220 | orchestrator | =============================================================================== 2025-11-08 13:23:49.376232 | orchestrator | osism.commons.repository : Update package cache ------------------------ 18.70s 2025-11-08 13:23:49.376243 | orchestrator | Install required packages (Debian) -------------------------------------- 6.94s 2025-11-08 13:23:49.376254 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.60s 2025-11-08 13:23:49.376265 | orchestrator | Copy fact files --------------------------------------------------------- 3.41s 2025-11-08 13:23:49.376276 | orchestrator | Create custom facts directory ------------------------------------------- 1.62s 2025-11-08 13:23:49.376287 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.29s 2025-11-08 13:23:49.376307 | orchestrator | Copy fact file ---------------------------------------------------------- 1.24s 2025-11-08 13:23:49.578794 | orchestrator | osism.commons.repository : Copy ubuntu.sources file 
--------------------- 1.06s 2025-11-08 13:23:49.578910 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.03s 2025-11-08 13:23:49.578924 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.46s 2025-11-08 13:23:49.578933 | orchestrator | Create custom facts directory ------------------------------------------- 0.46s 2025-11-08 13:23:49.578942 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.45s 2025-11-08 13:23:49.578951 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.25s 2025-11-08 13:23:49.578960 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.21s 2025-11-08 13:23:49.578969 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.15s 2025-11-08 13:23:49.578978 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.13s 2025-11-08 13:23:49.578987 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.13s 2025-11-08 13:23:49.578996 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.12s 2025-11-08 13:23:49.877447 | orchestrator | + osism apply bootstrap 2025-11-08 13:24:01.897896 | orchestrator | 2025-11-08 13:24:01 | INFO  | Task e6b8555e-2621-4e17-ac84-1b29030ae3fa (bootstrap) was prepared for execution. 2025-11-08 13:24:01.898051 | orchestrator | 2025-11-08 13:24:01 | INFO  | It takes a moment until task e6b8555e-2621-4e17-ac84-1b29030ae3fa (bootstrap) has been started and output is visible here. 2025-11-08 13:24:17.272070 | orchestrator | 2025-11-08 13:24:17.272168 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-11-08 13:24:17.272184 | orchestrator | 2025-11-08 13:24:17.272197 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-11-08 13:24:17.272209 | orchestrator | Saturday 08 November 2025 13:24:05 +0000 (0:00:00.118) 0:00:00.118 ***** 2025-11-08 13:24:17.272220 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:24:17.272232 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:24:17.272243 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:24:17.272276 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:24:17.272287 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:24:17.272298 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:24:17.272310 | orchestrator | ok: [testbed-manager] 2025-11-08 13:24:17.272321 | orchestrator | 2025-11-08 13:24:17.272332 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-11-08 13:24:17.272343 | orchestrator | 2025-11-08 13:24:17.272354 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-11-08 13:24:17.272365 | orchestrator | Saturday 08 November 2025 13:24:06 +0000 (0:00:00.207) 0:00:00.325 ***** 2025-11-08 13:24:17.272376 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:24:17.272387 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:24:17.272398 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:24:17.272408 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:24:17.272419 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:24:17.272430 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:24:17.272454 | orchestrator | ok: [testbed-manager] 2025-11-08 13:24:17.272466 | 
orchestrator | 2025-11-08 13:24:17.272477 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2025-11-08 13:24:17.272487 | orchestrator | 2025-11-08 13:24:17.272499 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-11-08 13:24:17.272509 | orchestrator | Saturday 08 November 2025 13:24:09 +0000 (0:00:03.588) 0:00:03.914 ***** 2025-11-08 13:24:17.272522 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-11-08 13:24:17.272534 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-11-08 13:24:17.272544 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-11-08 13:24:17.272555 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-11-08 13:24:17.272566 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-11-08 13:24:17.272577 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-11-08 13:24:17.272587 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-11-08 13:24:17.272598 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-11-08 13:24:17.272610 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-11-08 13:24:17.272622 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-11-08 13:24:17.272634 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-11-08 13:24:17.272646 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-11-08 13:24:17.272659 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-11-08 13:24:17.272670 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-11-08 13:24:17.272682 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-11-08 13:24:17.272694 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-11-08 13:24:17.272706 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-11-08 13:24:17.272718 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-11-08 13:24:17.272730 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-11-08 13:24:17.272742 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-11-08 13:24:17.272754 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-11-08 13:24:17.272766 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-11-08 13:24:17.272779 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:24:17.272791 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:24:17.272803 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-11-08 13:24:17.272815 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-11-08 13:24:17.272827 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-11-08 13:24:17.272839 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-11-08 13:24:17.272851 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-11-08 13:24:17.272898 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-08 13:24:17.272912 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-11-08 13:24:17.272924 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-08 13:24:17.272937 | orchestrator | skipping: [testbed-node-5] => 
(item=testbed-node-2)  2025-11-08 13:24:17.272949 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-11-08 13:24:17.272961 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-11-08 13:24:17.272972 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-08 13:24:17.272983 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-11-08 13:24:17.272994 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-11-08 13:24:17.273004 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-11-08 13:24:17.273015 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-11-08 13:24:17.273026 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:24:17.273037 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-11-08 13:24:17.273048 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-11-08 13:24:17.273059 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-11-08 13:24:17.273070 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:24:17.273081 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-11-08 13:24:17.273092 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:24:17.273119 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-11-08 13:24:17.273131 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-11-08 13:24:17.273142 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:24:17.273153 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-11-08 13:24:17.273163 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-11-08 13:24:17.273174 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-11-08 13:24:17.273185 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-11-08 13:24:17.273196 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-11-08 13:24:17.273207 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:24:17.273218 | orchestrator | 2025-11-08 13:24:17.273229 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-11-08 13:24:17.273240 | orchestrator | 2025-11-08 13:24:17.273251 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-11-08 13:24:17.273262 | orchestrator | Saturday 08 November 2025 13:24:10 +0000 (0:00:00.463) 0:00:04.377 ***** 2025-11-08 13:24:17.273273 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:24:17.273283 | orchestrator | ok: [testbed-manager] 2025-11-08 13:24:17.273294 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:24:17.273305 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:24:17.273316 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:24:17.273327 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:24:17.273338 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:24:17.273348 | orchestrator | 2025-11-08 13:24:17.273359 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-11-08 13:24:17.273370 | orchestrator | Saturday 08 November 2025 13:24:11 +0000 (0:00:01.213) 0:00:05.591 ***** 2025-11-08 13:24:17.273381 | orchestrator | ok: [testbed-manager] 2025-11-08 13:24:17.273392 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:24:17.273403 | orchestrator | ok: [testbed-node-4] 2025-11-08 
13:24:17.273413 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:24:17.273424 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:24:17.273435 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:24:17.273446 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:24:17.273457 | orchestrator | 2025-11-08 13:24:17.273467 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-11-08 13:24:17.273478 | orchestrator | Saturday 08 November 2025 13:24:12 +0000 (0:00:01.215) 0:00:06.806 ***** 2025-11-08 13:24:17.273497 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-08 13:24:17.273511 | orchestrator | 2025-11-08 13:24:17.273522 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-11-08 13:24:17.273533 | orchestrator | Saturday 08 November 2025 13:24:12 +0000 (0:00:00.308) 0:00:07.115 ***** 2025-11-08 13:24:17.273544 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:24:17.273555 | orchestrator | changed: [testbed-manager] 2025-11-08 13:24:17.273566 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:24:17.273577 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:24:17.273588 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:24:17.273599 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:24:17.273610 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:24:17.273621 | orchestrator | 2025-11-08 13:24:17.273632 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-11-08 13:24:17.273643 | orchestrator | Saturday 08 November 2025 13:24:14 +0000 (0:00:01.957) 0:00:09.073 ***** 2025-11-08 13:24:17.273653 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:24:17.273665 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 13:24:17.273677 | orchestrator | 2025-11-08 13:24:17.273688 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-11-08 13:24:17.273699 | orchestrator | Saturday 08 November 2025 13:24:15 +0000 (0:00:00.253) 0:00:09.326 ***** 2025-11-08 13:24:17.273710 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:24:17.273721 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:24:17.273732 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:24:17.273743 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:24:17.273753 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:24:17.273764 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:24:17.273775 | orchestrator | 2025-11-08 13:24:17.273786 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-11-08 13:24:17.273797 | orchestrator | Saturday 08 November 2025 13:24:16 +0000 (0:00:00.975) 0:00:10.301 ***** 2025-11-08 13:24:17.273808 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:24:17.273819 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:24:17.273829 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:24:17.275275 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:24:17.275311 | orchestrator | changed: [testbed-node-2] 2025-11-08 
13:24:17.275322 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:24:17.275332 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:24:17.275342 | orchestrator | 2025-11-08 13:24:17.275353 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-11-08 13:24:17.275363 | orchestrator | Saturday 08 November 2025 13:24:16 +0000 (0:00:00.513) 0:00:10.815 ***** 2025-11-08 13:24:17.275373 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:24:17.275383 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:24:17.275393 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:24:17.275403 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:24:17.275413 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:24:17.275423 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:24:17.275432 | orchestrator | ok: [testbed-manager] 2025-11-08 13:24:17.275442 | orchestrator | 2025-11-08 13:24:17.275452 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-11-08 13:24:17.275463 | orchestrator | Saturday 08 November 2025 13:24:17 +0000 (0:00:00.571) 0:00:11.387 ***** 2025-11-08 13:24:17.275473 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:24:17.275483 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:24:17.275517 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:24:28.170764 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:24:28.170925 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:24:28.170943 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:24:28.170955 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:24:28.170967 | orchestrator | 2025-11-08 13:24:28.170980 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-11-08 13:24:28.170993 | orchestrator | Saturday 08 November 2025 13:24:17 +0000 (0:00:00.223) 0:00:11.610 ***** 2025-11-08 13:24:28.171005 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-08 13:24:28.171035 | orchestrator | 2025-11-08 13:24:28.171047 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-11-08 13:24:28.171058 | orchestrator | Saturday 08 November 2025 13:24:17 +0000 (0:00:00.263) 0:00:11.874 ***** 2025-11-08 13:24:28.171119 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-08 13:24:28.171132 | orchestrator | 2025-11-08 13:24:28.171144 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-11-08 13:24:28.171155 | orchestrator | Saturday 08 November 2025 13:24:17 +0000 (0:00:00.289) 0:00:12.164 ***** 2025-11-08 13:24:28.171166 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:24:28.171177 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:24:28.171188 | orchestrator | ok: [testbed-manager] 2025-11-08 13:24:28.171199 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:24:28.171210 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:24:28.171220 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:24:28.171231 | 
orchestrator | ok: [testbed-node-3] 2025-11-08 13:24:28.171242 | orchestrator | 2025-11-08 13:24:28.171252 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-11-08 13:24:28.171263 | orchestrator | Saturday 08 November 2025 13:24:19 +0000 (0:00:01.251) 0:00:13.416 ***** 2025-11-08 13:24:28.171274 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:24:28.171287 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:24:28.171299 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:24:28.171311 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:24:28.171323 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:24:28.171335 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:24:28.171347 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:24:28.171359 | orchestrator | 2025-11-08 13:24:28.171372 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-11-08 13:24:28.171384 | orchestrator | Saturday 08 November 2025 13:24:19 +0000 (0:00:00.201) 0:00:13.618 ***** 2025-11-08 13:24:28.171396 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:24:28.171409 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:24:28.171422 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:24:28.171433 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:24:28.171445 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:24:28.171457 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:24:28.171469 | orchestrator | ok: [testbed-manager] 2025-11-08 13:24:28.171481 | orchestrator | 2025-11-08 13:24:28.171493 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-11-08 13:24:28.171505 | orchestrator | Saturday 08 November 2025 13:24:19 +0000 (0:00:00.502) 0:00:14.120 ***** 2025-11-08 13:24:28.171518 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:24:28.171530 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:24:28.171542 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:24:28.171553 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:24:28.171565 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:24:28.171577 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:24:28.171613 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:24:28.171626 | orchestrator | 2025-11-08 13:24:28.171639 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-11-08 13:24:28.171652 | orchestrator | Saturday 08 November 2025 13:24:20 +0000 (0:00:00.234) 0:00:14.355 ***** 2025-11-08 13:24:28.171663 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:24:28.171673 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:24:28.171684 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:24:28.171695 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:24:28.171705 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:24:28.171716 | orchestrator | ok: [testbed-manager] 2025-11-08 13:24:28.171726 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:24:28.171737 | orchestrator | 2025-11-08 13:24:28.171748 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-11-08 13:24:28.171759 | orchestrator | Saturday 08 November 2025 13:24:20 +0000 (0:00:00.498) 0:00:14.853 ***** 2025-11-08 13:24:28.171770 | orchestrator | ok: [testbed-manager] 2025-11-08 13:24:28.171780 | orchestrator | changed: 
[testbed-node-0] 2025-11-08 13:24:28.171791 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:24:28.171802 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:24:28.171812 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:24:28.171823 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:24:28.171833 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:24:28.171844 | orchestrator | 2025-11-08 13:24:28.171855 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-11-08 13:24:28.171865 | orchestrator | Saturday 08 November 2025 13:24:21 +0000 (0:00:00.973) 0:00:15.827 ***** 2025-11-08 13:24:28.171896 | orchestrator | ok: [testbed-manager] 2025-11-08 13:24:28.171908 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:24:28.171918 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:24:28.171929 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:24:28.171940 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:24:28.171950 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:24:28.171961 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:24:28.171971 | orchestrator | 2025-11-08 13:24:28.171982 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-11-08 13:24:28.171993 | orchestrator | Saturday 08 November 2025 13:24:22 +0000 (0:00:00.987) 0:00:16.814 ***** 2025-11-08 13:24:28.172022 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-08 13:24:28.172034 | orchestrator | 2025-11-08 13:24:28.172045 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-11-08 13:24:28.172056 | orchestrator | Saturday 08 November 2025 13:24:22 +0000 (0:00:00.290) 0:00:17.105 ***** 2025-11-08 13:24:28.172067 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:24:28.172078 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:24:28.172088 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:24:28.172099 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:24:28.172110 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:24:28.172120 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:24:28.172131 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:24:28.172141 | orchestrator | 2025-11-08 13:24:28.172152 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-11-08 13:24:28.172163 | orchestrator | Saturday 08 November 2025 13:24:24 +0000 (0:00:01.160) 0:00:18.265 ***** 2025-11-08 13:24:28.172174 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:24:28.172184 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:24:28.172195 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:24:28.172211 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:24:28.172222 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:24:28.172233 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:24:28.172252 | orchestrator | ok: [testbed-manager] 2025-11-08 13:24:28.172263 | orchestrator | 2025-11-08 13:24:28.172274 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-11-08 13:24:28.172284 | orchestrator | Saturday 08 November 2025 13:24:24 +0000 (0:00:00.221) 0:00:18.486 ***** 2025-11-08 13:24:28.172295 | orchestrator | ok: 
[testbed-node-0] 2025-11-08 13:24:28.172306 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:24:28.172316 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:24:28.172327 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:24:28.172337 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:24:28.172348 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:24:28.172359 | orchestrator | ok: [testbed-manager] 2025-11-08 13:24:28.172369 | orchestrator | 2025-11-08 13:24:28.172380 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-11-08 13:24:28.172391 | orchestrator | Saturday 08 November 2025 13:24:24 +0000 (0:00:00.230) 0:00:18.717 ***** 2025-11-08 13:24:28.172401 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:24:28.172412 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:24:28.172423 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:24:28.172433 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:24:28.172444 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:24:28.172454 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:24:28.172465 | orchestrator | ok: [testbed-manager] 2025-11-08 13:24:28.172476 | orchestrator | 2025-11-08 13:24:28.172486 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-11-08 13:24:28.172497 | orchestrator | Saturday 08 November 2025 13:24:24 +0000 (0:00:00.207) 0:00:18.924 ***** 2025-11-08 13:24:28.172509 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-08 13:24:28.172521 | orchestrator | 2025-11-08 13:24:28.172532 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-11-08 13:24:28.172543 | orchestrator | Saturday 08 November 2025 13:24:24 +0000 (0:00:00.316) 0:00:19.241 ***** 2025-11-08 13:24:28.172554 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:24:28.172564 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:24:28.172575 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:24:28.172586 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:24:28.172596 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:24:28.172607 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:24:28.172617 | orchestrator | ok: [testbed-manager] 2025-11-08 13:24:28.172628 | orchestrator | 2025-11-08 13:24:28.172638 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-11-08 13:24:28.172649 | orchestrator | Saturday 08 November 2025 13:24:25 +0000 (0:00:00.520) 0:00:19.761 ***** 2025-11-08 13:24:28.172660 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:24:28.172671 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:24:28.172681 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:24:28.172692 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:24:28.172702 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:24:28.172713 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:24:28.172723 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:24:28.172734 | orchestrator | 2025-11-08 13:24:28.172744 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-11-08 13:24:28.172755 | orchestrator | Saturday 08 November 2025 13:24:25 +0000 (0:00:00.228) 0:00:19.989 ***** 2025-11-08 13:24:28.172766 | 
orchestrator | changed: [testbed-node-0] 2025-11-08 13:24:28.172777 | orchestrator | ok: [testbed-manager] 2025-11-08 13:24:28.172788 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:24:28.172798 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:24:28.172809 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:24:28.172820 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:24:28.172830 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:24:28.172841 | orchestrator | 2025-11-08 13:24:28.172852 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-11-08 13:24:28.172897 | orchestrator | Saturday 08 November 2025 13:24:26 +0000 (0:00:00.953) 0:00:20.943 ***** 2025-11-08 13:24:28.172909 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:24:28.172920 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:24:28.172931 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:24:28.172941 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:24:28.172952 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:24:28.172962 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:24:28.172973 | orchestrator | ok: [testbed-manager] 2025-11-08 13:24:28.172984 | orchestrator | 2025-11-08 13:24:28.172994 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-11-08 13:24:28.173005 | orchestrator | Saturday 08 November 2025 13:24:27 +0000 (0:00:00.518) 0:00:21.462 ***** 2025-11-08 13:24:28.173016 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:24:28.173027 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:24:28.173037 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:24:28.173048 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:24:28.173066 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:25:12.234346 | orchestrator | ok: [testbed-manager] 2025-11-08 13:25:12.237274 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:25:12.237299 | orchestrator | 2025-11-08 13:25:12.237311 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-11-08 13:25:12.237323 | orchestrator | Saturday 08 November 2025 13:24:28 +0000 (0:00:00.957) 0:00:22.419 ***** 2025-11-08 13:25:12.237333 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:25:12.237343 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:25:12.237353 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:25:12.237363 | orchestrator | changed: [testbed-manager] 2025-11-08 13:25:12.237373 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:25:12.237382 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:25:12.237392 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:25:12.237402 | orchestrator | 2025-11-08 13:25:12.237412 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-11-08 13:25:12.237421 | orchestrator | Saturday 08 November 2025 13:24:48 +0000 (0:00:20.535) 0:00:42.954 ***** 2025-11-08 13:25:12.237431 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:25:12.237440 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:25:12.237450 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:25:12.237460 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:25:12.237469 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:25:12.237479 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:25:12.237489 | orchestrator | ok: [testbed-manager] 2025-11-08 13:25:12.237498 | orchestrator | 2025-11-08 13:25:12.237508 | orchestrator | TASK 
[osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-11-08 13:25:12.237518 | orchestrator | Saturday 08 November 2025 13:24:48 +0000 (0:00:00.235) 0:00:43.190 ***** 2025-11-08 13:25:12.237528 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:25:12.237538 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:25:12.237548 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:25:12.237557 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:25:12.237567 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:25:12.237576 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:25:12.237586 | orchestrator | ok: [testbed-manager] 2025-11-08 13:25:12.237595 | orchestrator | 2025-11-08 13:25:12.237605 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-11-08 13:25:12.237615 | orchestrator | Saturday 08 November 2025 13:24:49 +0000 (0:00:00.221) 0:00:43.411 ***** 2025-11-08 13:25:12.237624 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:25:12.237634 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:25:12.237644 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:25:12.237653 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:25:12.237663 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:25:12.237672 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:25:12.237682 | orchestrator | ok: [testbed-manager] 2025-11-08 13:25:12.237691 | orchestrator | 2025-11-08 13:25:12.237701 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-11-08 13:25:12.237733 | orchestrator | Saturday 08 November 2025 13:24:49 +0000 (0:00:00.217) 0:00:43.629 ***** 2025-11-08 13:25:12.237744 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-08 13:25:12.237768 | orchestrator | 2025-11-08 13:25:12.237779 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-11-08 13:25:12.237788 | orchestrator | Saturday 08 November 2025 13:24:49 +0000 (0:00:00.287) 0:00:43.916 ***** 2025-11-08 13:25:12.237798 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:25:12.237808 | orchestrator | ok: [testbed-manager] 2025-11-08 13:25:12.237817 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:25:12.237827 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:25:12.237836 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:25:12.237846 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:25:12.237855 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:25:12.237865 | orchestrator | 2025-11-08 13:25:12.237890 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-11-08 13:25:12.237915 | orchestrator | Saturday 08 November 2025 13:24:51 +0000 (0:00:01.546) 0:00:45.463 ***** 2025-11-08 13:25:12.237925 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:25:12.237935 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:25:12.237944 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:25:12.237954 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:25:12.237963 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:25:12.237973 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:25:12.237982 | orchestrator | changed: [testbed-manager] 2025-11-08 13:25:12.237992 | orchestrator | 2025-11-08 13:25:12.238001 | 
orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-11-08 13:25:12.238036 | orchestrator | Saturday 08 November 2025 13:24:52 +0000 (0:00:00.965) 0:00:46.428 ***** 2025-11-08 13:25:12.238049 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:25:12.238059 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:25:12.238069 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:25:12.238078 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:25:12.238088 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:25:12.238097 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:25:12.238106 | orchestrator | ok: [testbed-manager] 2025-11-08 13:25:12.238116 | orchestrator | 2025-11-08 13:25:12.238126 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-11-08 13:25:12.238136 | orchestrator | Saturday 08 November 2025 13:24:52 +0000 (0:00:00.752) 0:00:47.181 ***** 2025-11-08 13:25:12.238146 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-08 13:25:12.238157 | orchestrator | 2025-11-08 13:25:12.238167 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-11-08 13:25:12.238177 | orchestrator | Saturday 08 November 2025 13:24:53 +0000 (0:00:00.284) 0:00:47.465 ***** 2025-11-08 13:25:12.238187 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:25:12.238196 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:25:12.238206 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:25:12.238216 | orchestrator | changed: [testbed-manager] 2025-11-08 13:25:12.238225 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:25:12.238235 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:25:12.238244 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:25:12.238254 | orchestrator | 2025-11-08 13:25:12.238280 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-11-08 13:25:12.238291 | orchestrator | Saturday 08 November 2025 13:24:54 +0000 (0:00:00.901) 0:00:48.367 ***** 2025-11-08 13:25:12.238301 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:25:12.238310 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:25:12.238329 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:25:12.238338 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:25:12.238348 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:25:12.238357 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:25:12.238367 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:25:12.238376 | orchestrator | 2025-11-08 13:25:12.238386 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2025-11-08 13:25:12.238396 | orchestrator | Saturday 08 November 2025 13:24:54 +0000 (0:00:00.245) 0:00:48.613 ***** 2025-11-08 13:25:12.238406 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-08 13:25:12.238415 | orchestrator | 2025-11-08 13:25:12.238429 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2025-11-08 13:25:12.238439 | orchestrator | 
Saturday 08 November 2025 13:24:54 +0000 (0:00:00.282) 0:00:48.895 ***** 2025-11-08 13:25:12.238449 | orchestrator | ok: [testbed-manager] 2025-11-08 13:25:12.238458 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:25:12.238468 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:25:12.238477 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:25:12.238487 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:25:12.238496 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:25:12.238506 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:25:12.238515 | orchestrator | 2025-11-08 13:25:12.238525 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2025-11-08 13:25:12.238534 | orchestrator | Saturday 08 November 2025 13:24:56 +0000 (0:00:01.577) 0:00:50.472 ***** 2025-11-08 13:25:12.238544 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:25:12.238553 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:25:12.238563 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:25:12.238572 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:25:12.238581 | orchestrator | changed: [testbed-manager] 2025-11-08 13:25:12.238591 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:25:12.238600 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:25:12.238610 | orchestrator | 2025-11-08 13:25:12.238619 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-11-08 13:25:12.238629 | orchestrator | Saturday 08 November 2025 13:24:57 +0000 (0:00:01.046) 0:00:51.518 ***** 2025-11-08 13:25:12.238639 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:25:12.238648 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:25:12.238657 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:25:12.238667 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:25:12.238677 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:25:12.238686 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:25:12.238695 | orchestrator | changed: [testbed-manager] 2025-11-08 13:25:12.238705 | orchestrator | 2025-11-08 13:25:12.238715 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-11-08 13:25:12.238724 | orchestrator | Saturday 08 November 2025 13:25:09 +0000 (0:00:12.292) 0:01:03.811 ***** 2025-11-08 13:25:12.238734 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:25:12.238743 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:25:12.238753 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:25:12.238762 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:25:12.238772 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:25:12.238781 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:25:12.238790 | orchestrator | ok: [testbed-manager] 2025-11-08 13:25:12.238800 | orchestrator | 2025-11-08 13:25:12.238809 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-11-08 13:25:12.238819 | orchestrator | Saturday 08 November 2025 13:25:10 +0000 (0:00:01.097) 0:01:04.908 ***** 2025-11-08 13:25:12.238828 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:25:12.238838 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:25:12.238847 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:25:12.238862 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:25:12.238884 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:25:12.238894 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:25:12.238903 | orchestrator | ok: 
[testbed-manager] 2025-11-08 13:25:12.238913 | orchestrator | 2025-11-08 13:25:12.238923 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-11-08 13:25:12.238932 | orchestrator | Saturday 08 November 2025 13:25:11 +0000 (0:00:00.817) 0:01:05.726 ***** 2025-11-08 13:25:12.238942 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:25:12.238951 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:25:12.238961 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:25:12.238970 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:25:12.238979 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:25:12.238989 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:25:12.238999 | orchestrator | ok: [testbed-manager] 2025-11-08 13:25:12.239008 | orchestrator | 2025-11-08 13:25:12.239018 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-11-08 13:25:12.239027 | orchestrator | Saturday 08 November 2025 13:25:11 +0000 (0:00:00.210) 0:01:05.936 ***** 2025-11-08 13:25:12.239037 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:25:12.239046 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:25:12.239056 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:25:12.239065 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:25:12.239074 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:25:12.239084 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:25:12.239093 | orchestrator | ok: [testbed-manager] 2025-11-08 13:25:12.239102 | orchestrator | 2025-11-08 13:25:12.239112 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-11-08 13:25:12.239122 | orchestrator | Saturday 08 November 2025 13:25:11 +0000 (0:00:00.243) 0:01:06.180 ***** 2025-11-08 13:25:12.239132 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-08 13:25:12.239141 | orchestrator | 2025-11-08 13:25:12.239158 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-11-08 13:27:19.203640 | orchestrator | Saturday 08 November 2025 13:25:12 +0000 (0:00:00.301) 0:01:06.481 ***** 2025-11-08 13:27:19.203772 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:27:19.203799 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:27:19.203819 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:27:19.203835 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:27:19.203847 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:27:19.203907 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:27:19.203927 | orchestrator | ok: [testbed-manager] 2025-11-08 13:27:19.203947 | orchestrator | 2025-11-08 13:27:19.203968 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-11-08 13:27:19.203986 | orchestrator | Saturday 08 November 2025 13:25:13 +0000 (0:00:01.439) 0:01:07.921 ***** 2025-11-08 13:27:19.204006 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:27:19.204026 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:27:19.204045 | orchestrator | changed: [testbed-manager] 2025-11-08 13:27:19.204066 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:27:19.204084 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:27:19.204102 | orchestrator | changed: [testbed-node-5] 2025-11-08 
13:27:19.204120 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:27:19.204139 | orchestrator | 2025-11-08 13:27:19.204158 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-11-08 13:27:19.204198 | orchestrator | Saturday 08 November 2025 13:25:14 +0000 (0:00:00.539) 0:01:08.461 ***** 2025-11-08 13:27:19.204213 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:27:19.204225 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:27:19.204238 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:27:19.204253 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:27:19.204273 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:27:19.204321 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:27:19.204334 | orchestrator | ok: [testbed-manager] 2025-11-08 13:27:19.204346 | orchestrator | 2025-11-08 13:27:19.204364 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-11-08 13:27:19.204383 | orchestrator | Saturday 08 November 2025 13:25:14 +0000 (0:00:00.212) 0:01:08.673 ***** 2025-11-08 13:27:19.204404 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:27:19.204424 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:27:19.204442 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:27:19.204460 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:27:19.204472 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:27:19.204483 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:27:19.204497 | orchestrator | ok: [testbed-manager] 2025-11-08 13:27:19.204516 | orchestrator | 2025-11-08 13:27:19.204537 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-11-08 13:27:19.204555 | orchestrator | Saturday 08 November 2025 13:25:15 +0000 (0:00:01.034) 0:01:09.708 ***** 2025-11-08 13:27:19.204574 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:27:19.204585 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:27:19.204596 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:27:19.204607 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:27:19.204618 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:27:19.204630 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:27:19.204649 | orchestrator | changed: [testbed-manager] 2025-11-08 13:27:19.204668 | orchestrator | 2025-11-08 13:27:19.204686 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-11-08 13:27:19.204705 | orchestrator | Saturday 08 November 2025 13:25:16 +0000 (0:00:01.422) 0:01:11.130 ***** 2025-11-08 13:27:19.204724 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:27:19.204742 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:27:19.204762 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:27:19.204780 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:27:19.204799 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:27:19.204818 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:27:19.204838 | orchestrator | ok: [testbed-manager] 2025-11-08 13:27:19.204856 | orchestrator | 2025-11-08 13:27:19.204894 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-11-08 13:27:19.204906 | orchestrator | Saturday 08 November 2025 13:25:18 +0000 (0:00:01.921) 0:01:13.051 ***** 2025-11-08 13:27:19.204917 | orchestrator | ok: [testbed-manager] 2025-11-08 13:27:19.204928 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:27:19.204938 | orchestrator | ok: [testbed-node-1] 
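The osism.commons.packages tasks above refresh the apt cache, stage and apply pending upgrades, and then install the required package set on every node. As a minimal sketch of that flow, assuming the stock ansible.builtin.apt module and a hypothetical required_packages list rather than the actual osism.commons.packages implementation, the steps map to tasks like:

  - name: Update package cache
    ansible.builtin.apt:
      update_cache: true
      cache_valid_time: 3600   # the log shows apt_cache_valid_time being set to a default value first

  - name: Upgrade packages
    ansible.builtin.apt:
      upgrade: dist

  - name: Install required packages
    ansible.builtin.apt:
      name: "{{ required_packages }}"   # hypothetical variable standing in for the role's package list
      state: present

  - name: Remove useless packages from the cache
    ansible.builtin.apt:
      autoclean: true

  - name: Remove dependencies that are no longer required
    ansible.builtin.apt:
      autoremove: true

The separate "Download upgrade packages" / "Download required packages" steps in the log pre-fetch the package archives before they are applied; the sketch above folds each download/apply pair into a single task.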
2025-11-08 13:27:19.204949 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:27:19.204960 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:27:19.204971 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:27:19.204981 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:27:19.204992 | orchestrator | 2025-11-08 13:27:19.205003 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-11-08 13:27:19.205014 | orchestrator | Saturday 08 November 2025 13:25:48 +0000 (0:00:29.228) 0:01:42.280 ***** 2025-11-08 13:27:19.205025 | orchestrator | changed: [testbed-manager] 2025-11-08 13:27:19.205036 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:27:19.205047 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:27:19.205058 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:27:19.205069 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:27:19.205080 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:27:19.205091 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:27:19.205102 | orchestrator | 2025-11-08 13:27:19.205113 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-11-08 13:27:19.205124 | orchestrator | Saturday 08 November 2025 13:27:03 +0000 (0:01:15.950) 0:02:58.230 ***** 2025-11-08 13:27:19.205135 | orchestrator | ok: [testbed-manager] 2025-11-08 13:27:19.205146 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:27:19.205157 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:27:19.205168 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:27:19.205190 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:27:19.205200 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:27:19.205211 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:27:19.205222 | orchestrator | 2025-11-08 13:27:19.205233 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-11-08 13:27:19.205244 | orchestrator | Saturday 08 November 2025 13:27:05 +0000 (0:00:01.867) 0:03:00.097 ***** 2025-11-08 13:27:19.205255 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:27:19.205265 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:27:19.205276 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:27:19.205287 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:27:19.205298 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:27:19.205308 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:27:19.205319 | orchestrator | changed: [testbed-manager] 2025-11-08 13:27:19.205330 | orchestrator | 2025-11-08 13:27:19.205341 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-11-08 13:27:19.205352 | orchestrator | Saturday 08 November 2025 13:27:18 +0000 (0:00:12.202) 0:03:12.300 ***** 2025-11-08 13:27:19.205396 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-11-08 13:27:19.205414 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-11-08 13:27:19.205429 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-11-08 13:27:19.205449 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-11-08 13:27:19.205460 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-11-08 13:27:19.205472 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-11-08 13:27:19.205483 | orchestrator | 2025-11-08 13:27:19.205494 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-11-08 13:27:19.205506 | orchestrator | Saturday 08 November 2025 13:27:18 +0000 (0:00:00.405) 0:03:12.705 ***** 2025-11-08 13:27:19.205517 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-11-08 13:27:19.205535 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-11-08 13:27:19.205546 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:27:19.205557 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-11-08 13:27:19.205568 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:27:19.205579 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:27:19.205590 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-11-08 13:27:19.205601 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:27:19.205616 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-11-08 13:27:19.205627 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-11-08 13:27:19.205638 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-11-08 13:27:19.205649 | orchestrator | 2025-11-08 13:27:19.205668 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-11-08 13:27:19.205679 | 
orchestrator | Saturday 08 November 2025 13:27:19 +0000 (0:00:00.612) 0:03:13.317 ***** 2025-11-08 13:27:19.205690 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-11-08 13:27:19.205702 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-11-08 13:27:19.205713 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-11-08 13:27:19.205724 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-11-08 13:27:19.205735 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-11-08 13:27:19.205753 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-11-08 13:27:24.692245 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-11-08 13:27:24.692373 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-11-08 13:27:24.692388 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-11-08 13:27:24.692402 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-11-08 13:27:24.692413 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-11-08 13:27:24.692424 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-11-08 13:27:24.692435 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-11-08 13:27:24.692446 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-11-08 13:27:24.692477 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-11-08 13:27:24.692488 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-11-08 13:27:24.692499 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-11-08 13:27:24.692511 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-11-08 13:27:24.692522 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-11-08 13:27:24.692533 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-11-08 13:27:24.692544 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-11-08 13:27:24.692555 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-11-08 13:27:24.692566 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-11-08 13:27:24.692606 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-11-08 13:27:24.692617 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-11-08 13:27:24.692628 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-11-08 
13:27:24.692639 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-11-08 13:27:24.692650 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-11-08 13:27:24.692661 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-11-08 13:27:24.692671 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-11-08 13:27:24.692682 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:27:24.692695 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:27:24.692705 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:27:24.692716 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-11-08 13:27:24.692727 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-11-08 13:27:24.692738 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-11-08 13:27:24.692748 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-11-08 13:27:24.692759 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-11-08 13:27:24.692772 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-11-08 13:27:24.692787 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-11-08 13:27:24.692806 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-11-08 13:27:24.692826 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-11-08 13:27:24.692843 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-11-08 13:27:24.692862 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:27:24.692919 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-11-08 13:27:24.692937 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-11-08 13:27:24.692956 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-11-08 13:27:24.692967 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-11-08 13:27:24.692978 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-11-08 13:27:24.693008 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-11-08 13:27:24.693020 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-11-08 13:27:24.693031 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-11-08 13:27:24.693042 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-11-08 13:27:24.693053 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-11-08 13:27:24.693063 | orchestrator | changed: [testbed-node-2] => (item={'name': 
'net.core.wmem_max', 'value': 16777216}) 2025-11-08 13:27:24.693074 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-11-08 13:27:24.693095 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-11-08 13:27:24.693112 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-11-08 13:27:24.693124 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-11-08 13:27:24.693135 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-11-08 13:27:24.693146 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-11-08 13:27:24.693157 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-11-08 13:27:24.693168 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-11-08 13:27:24.693179 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-11-08 13:27:24.693189 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-11-08 13:27:24.693200 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-11-08 13:27:24.693210 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-11-08 13:27:24.693221 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-11-08 13:27:24.693232 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-11-08 13:27:24.693243 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-11-08 13:27:24.693253 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-11-08 13:27:24.693264 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-11-08 13:27:24.693275 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-11-08 13:27:24.693285 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-11-08 13:27:24.693296 | orchestrator | 2025-11-08 13:27:24.693308 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-11-08 13:27:24.693319 | orchestrator | Saturday 08 November 2025 13:27:23 +0000 (0:00:04.460) 0:03:17.778 ***** 2025-11-08 13:27:24.693330 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-11-08 13:27:24.693341 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-11-08 13:27:24.693351 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-11-08 13:27:24.693362 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-11-08 13:27:24.693373 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-11-08 13:27:24.693383 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-11-08 13:27:24.693394 | orchestrator | changed: 
[testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-11-08 13:27:24.693405 | orchestrator | 2025-11-08 13:27:24.693415 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-11-08 13:27:24.693426 | orchestrator | Saturday 08 November 2025 13:27:24 +0000 (0:00:00.578) 0:03:18.357 ***** 2025-11-08 13:27:24.693437 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-11-08 13:27:24.693448 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-11-08 13:27:24.693459 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:27:24.693469 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-11-08 13:27:24.693480 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:27:24.693499 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:27:24.693510 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-11-08 13:27:24.693521 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:27:24.693532 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-11-08 13:27:24.693543 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-11-08 13:27:24.693566 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-11-08 13:27:39.579618 | orchestrator | 2025-11-08 13:27:39.581649 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] ***************** 2025-11-08 13:27:39.581692 | orchestrator | Saturday 08 November 2025 13:27:24 +0000 (0:00:00.588) 0:03:18.945 ***** 2025-11-08 13:27:39.581705 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-11-08 13:27:39.581718 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-11-08 13:27:39.581730 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:27:39.581742 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-11-08 13:27:39.581754 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:27:39.581765 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:27:39.581796 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-11-08 13:27:39.581807 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:27:39.581818 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-11-08 13:27:39.581830 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-11-08 13:27:39.581842 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-11-08 13:27:39.581853 | orchestrator | 2025-11-08 13:27:39.581893 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-11-08 13:27:39.581905 | orchestrator | Saturday 08 November 2025 13:27:25 +0000 (0:00:00.456) 0:03:19.402 ***** 2025-11-08 13:27:39.581917 | orchestrator | skipping: 
[testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-11-08 13:27:39.581928 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:27:39.581939 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-11-08 13:27:39.581950 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-11-08 13:27:39.581961 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:27:39.581972 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:27:39.581983 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-11-08 13:27:39.581995 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:27:39.582006 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-11-08 13:27:39.582056 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-11-08 13:27:39.582070 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-11-08 13:27:39.582082 | orchestrator | 2025-11-08 13:27:39.582093 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-11-08 13:27:39.582104 | orchestrator | Saturday 08 November 2025 13:27:26 +0000 (0:00:01.637) 0:03:21.039 ***** 2025-11-08 13:27:39.582115 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:27:39.582153 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:27:39.582166 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:27:39.582205 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:27:39.582216 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:27:39.582227 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:27:39.582238 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:27:39.582248 | orchestrator | 2025-11-08 13:27:39.582259 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-11-08 13:27:39.582270 | orchestrator | Saturday 08 November 2025 13:27:27 +0000 (0:00:00.268) 0:03:21.307 ***** 2025-11-08 13:27:39.582281 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:27:39.582293 | orchestrator | ok: [testbed-manager] 2025-11-08 13:27:39.582304 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:27:39.582315 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:27:39.582326 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:27:39.582337 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:27:39.582347 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:27:39.582358 | orchestrator | 2025-11-08 13:27:39.582369 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-11-08 13:27:39.582380 | orchestrator | Saturday 08 November 2025 13:27:33 +0000 (0:00:05.961) 0:03:27.269 ***** 2025-11-08 13:27:39.582391 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-11-08 13:27:39.582402 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:27:39.582412 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-11-08 13:27:39.582423 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-11-08 13:27:39.582434 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:27:39.582445 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-11-08 13:27:39.582455 | orchestrator | skipping: 
[testbed-node-2] 2025-11-08 13:27:39.582466 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:27:39.582476 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-11-08 13:27:39.582487 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-11-08 13:27:39.582498 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:27:39.582508 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:27:39.582519 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-11-08 13:27:39.582530 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:27:39.582540 | orchestrator | 2025-11-08 13:27:39.582551 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-11-08 13:27:39.582562 | orchestrator | Saturday 08 November 2025 13:27:33 +0000 (0:00:00.291) 0:03:27.561 ***** 2025-11-08 13:27:39.582573 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-11-08 13:27:39.582584 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-11-08 13:27:39.582595 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-11-08 13:27:39.582627 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-11-08 13:27:39.582639 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-11-08 13:27:39.582650 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-11-08 13:27:39.582661 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-11-08 13:27:39.582671 | orchestrator | 2025-11-08 13:27:39.582682 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-11-08 13:27:39.582693 | orchestrator | Saturday 08 November 2025 13:27:35 +0000 (0:00:02.009) 0:03:29.570 ***** 2025-11-08 13:27:39.582707 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-08 13:27:39.582720 | orchestrator | 2025-11-08 13:27:39.582731 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-11-08 13:27:39.582742 | orchestrator | Saturday 08 November 2025 13:27:35 +0000 (0:00:00.375) 0:03:29.946 ***** 2025-11-08 13:27:39.582753 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:27:39.582764 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:27:39.582774 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:27:39.582785 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:27:39.582796 | orchestrator | ok: [testbed-manager] 2025-11-08 13:27:39.582815 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:27:39.582826 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:27:39.582837 | orchestrator | 2025-11-08 13:27:39.582847 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-11-08 13:27:39.582858 | orchestrator | Saturday 08 November 2025 13:27:36 +0000 (0:00:01.211) 0:03:31.157 ***** 2025-11-08 13:27:39.582888 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:27:39.582899 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:27:39.582910 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:27:39.582921 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:27:39.582931 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:27:39.582942 | orchestrator | ok: [testbed-manager] 2025-11-08 13:27:39.582952 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:27:39.582963 | orchestrator | 2025-11-08 13:27:39.582974 | orchestrator | TASK 
[osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-11-08 13:27:39.582985 | orchestrator | Saturday 08 November 2025 13:27:37 +0000 (0:00:00.572) 0:03:31.730 ***** 2025-11-08 13:27:39.582996 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:27:39.583007 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:27:39.583017 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:27:39.583028 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:27:39.583048 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:27:39.583059 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:27:39.583070 | orchestrator | changed: [testbed-manager] 2025-11-08 13:27:39.583081 | orchestrator | 2025-11-08 13:27:39.583092 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-11-08 13:27:39.583102 | orchestrator | Saturday 08 November 2025 13:27:38 +0000 (0:00:00.618) 0:03:32.349 ***** 2025-11-08 13:27:39.583113 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:27:39.583124 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:27:39.583135 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:27:39.583146 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:27:39.583156 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:27:39.583167 | orchestrator | ok: [testbed-manager] 2025-11-08 13:27:39.583178 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:27:39.583189 | orchestrator | 2025-11-08 13:27:39.583199 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-11-08 13:27:39.583210 | orchestrator | Saturday 08 November 2025 13:27:38 +0000 (0:00:00.571) 0:03:32.920 ***** 2025-11-08 13:27:39.583224 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1762606943.9050066, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-08 13:27:39.583239 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1762606931.8650186, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-08 13:27:39.583251 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1762606940.1708803, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-08 13:27:39.583296 | orchestrator | changed: 
[testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1762606958.173329, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-08 13:27:44.222924 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1762606950.3398848, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-08 13:27:44.223034 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1762606949.4071448, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-08 13:27:44.223051 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1762606916.0864766, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-08 13:27:44.223064 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-08 13:27:44.223076 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-08 13:27:44.223088 | orchestrator | changed: 
[testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-08 13:27:44.223123 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-08 13:27:44.223159 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-08 13:27:44.223172 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-08 13:27:44.223184 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-08 13:27:44.223196 | orchestrator | 2025-11-08 13:27:44.223210 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-11-08 13:27:44.223222 | orchestrator | Saturday 08 November 2025 13:27:39 +0000 (0:00:00.911) 0:03:33.831 ***** 2025-11-08 13:27:44.223233 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:27:44.223245 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:27:44.223256 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:27:44.223266 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:27:44.223277 | orchestrator | changed: [testbed-node-4] 
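
The loop output above comes from the osism.commons.motd role working through the PAM files it reported (/etc/pam.d/sshd and /etc/pam.d/login) before the motd, issue and issue.net files are copied and sshd's own PrintMotd handling is adjusted. The role source is not part of this log, so the following is only a rough sketch of tasks that would produce the same sequence of results; the pam_motd regex, the source file name and the sshd_config edit are assumptions, not the role's actual implementation:

- name: Disable pam_motd in the PAM files reported above (assumed behaviour)
  ansible.builtin.replace:
    path: "{{ item }}"
    regexp: '^(session\s+optional\s+pam_motd\.so.*)$'
    replace: '# \1'
  loop:
    - /etc/pam.d/sshd
    - /etc/pam.d/login

- name: Copy motd file
  ansible.builtin.copy:
    src: motd            # source file name is a placeholder
    dest: /etc/motd
    owner: root
    group: root
    mode: "0644"

- name: Configure SSH to not print the motd
  ansible.builtin.lineinfile:
    path: /etc/ssh/sshd_config
    regexp: '^#?PrintMotd'
    line: PrintMotd no
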
2025-11-08 13:27:44.223287 | orchestrator | changed: [testbed-manager] 2025-11-08 13:27:44.223298 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:27:44.223309 | orchestrator | 2025-11-08 13:27:44.223320 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-11-08 13:27:44.223330 | orchestrator | Saturday 08 November 2025 13:27:40 +0000 (0:00:01.048) 0:03:34.879 ***** 2025-11-08 13:27:44.223341 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:27:44.223352 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:27:44.223384 | orchestrator | changed: [testbed-manager] 2025-11-08 13:27:44.223395 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:27:44.223407 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:27:44.223419 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:27:44.223431 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:27:44.223442 | orchestrator | 2025-11-08 13:27:44.223455 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-11-08 13:27:44.223467 | orchestrator | Saturday 08 November 2025 13:27:41 +0000 (0:00:01.009) 0:03:35.889 ***** 2025-11-08 13:27:44.223488 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:27:44.223499 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:27:44.223511 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:27:44.223523 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:27:44.223535 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:27:44.223547 | orchestrator | changed: [testbed-manager] 2025-11-08 13:27:44.223558 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:27:44.223570 | orchestrator | 2025-11-08 13:27:44.223582 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-11-08 13:27:44.223595 | orchestrator | Saturday 08 November 2025 13:27:42 +0000 (0:00:01.148) 0:03:37.038 ***** 2025-11-08 13:27:44.223606 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:27:44.223617 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:27:44.223628 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:27:44.223638 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:27:44.223649 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:27:44.223660 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:27:44.223671 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:27:44.223683 | orchestrator | 2025-11-08 13:27:44.223694 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-11-08 13:27:44.223705 | orchestrator | Saturday 08 November 2025 13:27:43 +0000 (0:00:00.266) 0:03:37.304 ***** 2025-11-08 13:27:44.223716 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:27:44.223729 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:27:44.223740 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:27:44.223750 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:27:44.223761 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:27:44.223772 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:27:44.223783 | orchestrator | ok: [testbed-manager] 2025-11-08 13:27:44.223794 | orchestrator | 2025-11-08 13:27:44.223805 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-11-08 13:27:44.223816 | orchestrator | Saturday 08 November 2025 13:27:43 +0000 (0:00:00.705) 0:03:38.009 ***** 2025-11-08 13:27:44.223828 | orchestrator | 
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-08 13:27:44.223842 | orchestrator | 2025-11-08 13:27:44.223854 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-11-08 13:27:44.223905 | orchestrator | Saturday 08 November 2025 13:27:44 +0000 (0:00:00.465) 0:03:38.475 ***** 2025-11-08 13:29:02.665338 | orchestrator | ok: [testbed-manager] 2025-11-08 13:29:02.665423 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:29:02.665436 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:29:02.665446 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:29:02.665463 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:29:02.665473 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:29:02.665483 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:29:02.665493 | orchestrator | 2025-11-08 13:29:02.665504 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-11-08 13:29:02.665515 | orchestrator | Saturday 08 November 2025 13:27:53 +0000 (0:00:08.975) 0:03:47.450 ***** 2025-11-08 13:29:02.665525 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:29:02.665535 | orchestrator | ok: [testbed-manager] 2025-11-08 13:29:02.665545 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:29:02.665554 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:29:02.665564 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:29:02.665574 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:29:02.665583 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:29:02.665593 | orchestrator | 2025-11-08 13:29:02.665603 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-11-08 13:29:02.665613 | orchestrator | Saturday 08 November 2025 13:27:54 +0000 (0:00:01.368) 0:03:48.819 ***** 2025-11-08 13:29:02.665637 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:29:02.665648 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:29:02.665657 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:29:02.665667 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:29:02.665676 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:29:02.665686 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:29:02.665696 | orchestrator | ok: [testbed-manager] 2025-11-08 13:29:02.665705 | orchestrator | 2025-11-08 13:29:02.665715 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-11-08 13:29:02.665725 | orchestrator | Saturday 08 November 2025 13:27:55 +0000 (0:00:00.998) 0:03:49.818 ***** 2025-11-08 13:29:02.665735 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:29:02.665744 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:29:02.665754 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:29:02.665763 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:29:02.665773 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:29:02.665782 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:29:02.665792 | orchestrator | ok: [testbed-manager] 2025-11-08 13:29:02.665802 | orchestrator | 2025-11-08 13:29:02.665811 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-11-08 13:29:02.665822 | orchestrator | Saturday 08 November 2025 13:27:55 +0000 (0:00:00.359) 0:03:50.177 ***** 2025-11-08 
13:29:02.665831 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:29:02.665841 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:29:02.665875 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:29:02.665886 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:29:02.665895 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:29:02.665905 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:29:02.665916 | orchestrator | ok: [testbed-manager] 2025-11-08 13:29:02.665926 | orchestrator | 2025-11-08 13:29:02.665937 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-11-08 13:29:02.665947 | orchestrator | Saturday 08 November 2025 13:27:56 +0000 (0:00:00.284) 0:03:50.462 ***** 2025-11-08 13:29:02.665959 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:29:02.665970 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:29:02.665981 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:29:02.665991 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:29:02.666001 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:29:02.666012 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:29:02.666066 | orchestrator | ok: [testbed-manager] 2025-11-08 13:29:02.666077 | orchestrator | 2025-11-08 13:29:02.666088 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-11-08 13:29:02.666100 | orchestrator | Saturday 08 November 2025 13:27:56 +0000 (0:00:00.308) 0:03:50.770 ***** 2025-11-08 13:29:02.666111 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:29:02.666121 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:29:02.666132 | orchestrator | ok: [testbed-manager] 2025-11-08 13:29:02.666143 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:29:02.666154 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:29:02.666165 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:29:02.666176 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:29:02.666187 | orchestrator | 2025-11-08 13:29:02.666198 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-11-08 13:29:02.666209 | orchestrator | Saturday 08 November 2025 13:28:01 +0000 (0:00:05.240) 0:03:56.011 ***** 2025-11-08 13:29:02.666221 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-08 13:29:02.666235 | orchestrator | 2025-11-08 13:29:02.666246 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-11-08 13:29:02.666258 | orchestrator | Saturday 08 November 2025 13:28:02 +0000 (0:00:00.400) 0:03:56.411 ***** 2025-11-08 13:29:02.666268 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-11-08 13:29:02.666278 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-11-08 13:29:02.666294 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-11-08 13:29:02.666304 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:29:02.666314 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-11-08 13:29:02.666323 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-11-08 13:29:02.666333 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-11-08 13:29:02.666342 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:29:02.666352 | orchestrator 
| skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-11-08 13:29:02.666362 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-11-08 13:29:02.666371 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:29:02.666381 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-11-08 13:29:02.666390 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-11-08 13:29:02.666400 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:29:02.666409 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:29:02.666419 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-11-08 13:29:02.666442 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-11-08 13:29:02.666452 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:29:02.666462 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-11-08 13:29:02.666472 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-11-08 13:29:02.666481 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:29:02.666491 | orchestrator | 2025-11-08 13:29:02.666501 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-11-08 13:29:02.666510 | orchestrator | Saturday 08 November 2025 13:28:02 +0000 (0:00:00.320) 0:03:56.732 ***** 2025-11-08 13:29:02.666521 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-08 13:29:02.666531 | orchestrator | 2025-11-08 13:29:02.666540 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-11-08 13:29:02.666550 | orchestrator | Saturday 08 November 2025 13:28:02 +0000 (0:00:00.377) 0:03:57.110 ***** 2025-11-08 13:29:02.666560 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-11-08 13:29:02.666570 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:29:02.666580 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-11-08 13:29:02.666589 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-11-08 13:29:02.666599 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:29:02.666609 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-11-08 13:29:02.666618 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:29:02.666628 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-11-08 13:29:02.666638 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:29:02.666654 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-11-08 13:29:02.666664 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:29:02.666674 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:29:02.666684 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-11-08 13:29:02.666693 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:29:02.666703 | orchestrator | 2025-11-08 13:29:02.666713 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-11-08 13:29:02.666722 | orchestrator | Saturday 08 November 2025 13:28:03 +0000 (0:00:00.312) 0:03:57.422 ***** 2025-11-08 13:29:02.666732 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-08 13:29:02.666747 | orchestrator | 2025-11-08 13:29:02.666757 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-11-08 13:29:02.666766 | orchestrator | Saturday 08 November 2025 13:28:03 +0000 (0:00:00.404) 0:03:57.827 ***** 2025-11-08 13:29:02.666776 | orchestrator | changed: [testbed-manager] 2025-11-08 13:29:02.666786 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:29:02.666796 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:29:02.666805 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:29:02.666815 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:29:02.666825 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:29:02.666834 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:29:02.666844 | orchestrator | 2025-11-08 13:29:02.666868 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-11-08 13:29:02.666878 | orchestrator | Saturday 08 November 2025 13:28:38 +0000 (0:00:35.172) 0:04:33.000 ***** 2025-11-08 13:29:02.666888 | orchestrator | changed: [testbed-manager] 2025-11-08 13:29:02.666898 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:29:02.666907 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:29:02.666917 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:29:02.666927 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:29:02.666936 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:29:02.666945 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:29:02.666955 | orchestrator | 2025-11-08 13:29:02.666965 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-11-08 13:29:02.666975 | orchestrator | Saturday 08 November 2025 13:28:47 +0000 (0:00:08.634) 0:04:41.634 ***** 2025-11-08 13:29:02.666984 | orchestrator | changed: [testbed-manager] 2025-11-08 13:29:02.666994 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:29:02.667003 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:29:02.667013 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:29:02.667022 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:29:02.667032 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:29:02.667041 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:29:02.667051 | orchestrator | 2025-11-08 13:29:02.667061 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-11-08 13:29:02.667070 | orchestrator | Saturday 08 November 2025 13:28:55 +0000 (0:00:07.858) 0:04:49.493 ***** 2025-11-08 13:29:02.667080 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:29:02.667090 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:29:02.667099 | orchestrator | ok: [testbed-manager] 2025-11-08 13:29:02.667109 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:29:02.667118 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:29:02.667128 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:29:02.667137 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:29:02.667147 | orchestrator | 2025-11-08 13:29:02.667157 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-11-08 13:29:02.667166 | orchestrator | Saturday 08 November 2025 13:28:56 
+0000 (0:00:01.719) 0:04:51.212 ***** 2025-11-08 13:29:02.667176 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:29:02.667185 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:29:02.667195 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:29:02.667205 | orchestrator | changed: [testbed-manager] 2025-11-08 13:29:02.667214 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:29:02.667224 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:29:02.667233 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:29:02.667243 | orchestrator | 2025-11-08 13:29:02.667258 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-11-08 13:29:13.104811 | orchestrator | Saturday 08 November 2025 13:29:02 +0000 (0:00:05.694) 0:04:56.906 ***** 2025-11-08 13:29:13.105181 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-08 13:29:13.105237 | orchestrator | 2025-11-08 13:29:13.105253 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-11-08 13:29:13.105293 | orchestrator | Saturday 08 November 2025 13:29:03 +0000 (0:00:00.403) 0:04:57.310 ***** 2025-11-08 13:29:13.105305 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:29:13.105317 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:29:13.105328 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:29:13.105338 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:29:13.105349 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:29:13.105360 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:29:13.105373 | orchestrator | changed: [testbed-manager] 2025-11-08 13:29:13.105392 | orchestrator | 2025-11-08 13:29:13.105409 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-11-08 13:29:13.105426 | orchestrator | Saturday 08 November 2025 13:29:03 +0000 (0:00:00.727) 0:04:58.037 ***** 2025-11-08 13:29:13.105444 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:29:13.105461 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:29:13.105476 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:29:13.105494 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:29:13.105514 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:29:13.105532 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:29:13.105551 | orchestrator | ok: [testbed-manager] 2025-11-08 13:29:13.105563 | orchestrator | 2025-11-08 13:29:13.105573 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-11-08 13:29:13.105584 | orchestrator | Saturday 08 November 2025 13:29:05 +0000 (0:00:01.477) 0:04:59.514 ***** 2025-11-08 13:29:13.105595 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:29:13.105606 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:29:13.105617 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:29:13.105628 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:29:13.105639 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:29:13.105650 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:29:13.105661 | orchestrator | changed: [testbed-manager] 2025-11-08 13:29:13.105672 | orchestrator | 2025-11-08 13:29:13.105683 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 
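
The timezone handling reported just above (tzdata installed, timezone forced to UTC, the /etc/adjtime tasks skipped on these hosts) maps onto stock modules. A minimal sketch, assuming the osism.commons.timezone role does essentially this and nothing more:

- name: Install tzdata package
  ansible.builtin.apt:
    name: tzdata
    state: present

- name: Set timezone to UTC
  community.general.timezone:
    name: UTC
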
2025-11-08 13:29:13.105694 | orchestrator | Saturday 08 November 2025 13:29:06 +0000 (0:00:00.779) 0:05:00.294 ***** 2025-11-08 13:29:13.105705 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:29:13.105716 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:29:13.105727 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:29:13.105737 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:29:13.105748 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:29:13.105759 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:29:13.105770 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:29:13.105781 | orchestrator | 2025-11-08 13:29:13.105792 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-11-08 13:29:13.105803 | orchestrator | Saturday 08 November 2025 13:29:06 +0000 (0:00:00.261) 0:05:00.555 ***** 2025-11-08 13:29:13.105814 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:29:13.105825 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:29:13.105836 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:29:13.105892 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:29:13.105904 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:29:13.105915 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:29:13.105926 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:29:13.105937 | orchestrator | 2025-11-08 13:29:13.105948 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-11-08 13:29:13.105959 | orchestrator | Saturday 08 November 2025 13:29:06 +0000 (0:00:00.401) 0:05:00.957 ***** 2025-11-08 13:29:13.105970 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:29:13.105981 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:29:13.105992 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:29:13.106003 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:29:13.106084 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:29:13.106100 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:29:13.106124 | orchestrator | ok: [testbed-manager] 2025-11-08 13:29:13.106135 | orchestrator | 2025-11-08 13:29:13.106146 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-11-08 13:29:13.106157 | orchestrator | Saturday 08 November 2025 13:29:06 +0000 (0:00:00.290) 0:05:01.248 ***** 2025-11-08 13:29:13.106167 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:29:13.106178 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:29:13.106190 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:29:13.106200 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:29:13.106211 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:29:13.106222 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:29:13.106232 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:29:13.106243 | orchestrator | 2025-11-08 13:29:13.106254 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-11-08 13:29:13.106267 | orchestrator | Saturday 08 November 2025 13:29:07 +0000 (0:00:00.244) 0:05:01.492 ***** 2025-11-08 13:29:13.106278 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:29:13.106288 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:29:13.106299 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:29:13.106310 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:29:13.106321 | orchestrator | ok: [testbed-node-4] 2025-11-08 
13:29:13.106331 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:29:13.106342 | orchestrator | ok: [testbed-manager] 2025-11-08 13:29:13.106353 | orchestrator | 2025-11-08 13:29:13.106364 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2025-11-08 13:29:13.106375 | orchestrator | Saturday 08 November 2025 13:29:07 +0000 (0:00:00.303) 0:05:01.796 ***** 2025-11-08 13:29:13.106386 | orchestrator | ok: [testbed-node-0] =>  2025-11-08 13:29:13.106397 | orchestrator |  docker_version: 5:27.5.1 2025-11-08 13:29:13.106408 | orchestrator | ok: [testbed-node-1] =>  2025-11-08 13:29:13.106419 | orchestrator |  docker_version: 5:27.5.1 2025-11-08 13:29:13.106429 | orchestrator | ok: [testbed-node-2] =>  2025-11-08 13:29:13.106441 | orchestrator |  docker_version: 5:27.5.1 2025-11-08 13:29:13.106452 | orchestrator | ok: [testbed-node-3] =>  2025-11-08 13:29:13.106462 | orchestrator |  docker_version: 5:27.5.1 2025-11-08 13:29:13.106506 | orchestrator | ok: [testbed-node-4] =>  2025-11-08 13:29:13.106518 | orchestrator |  docker_version: 5:27.5.1 2025-11-08 13:29:13.106529 | orchestrator | ok: [testbed-node-5] =>  2025-11-08 13:29:13.106548 | orchestrator |  docker_version: 5:27.5.1 2025-11-08 13:29:13.106559 | orchestrator | ok: [testbed-manager] =>  2025-11-08 13:29:13.106570 | orchestrator |  docker_version: 5:27.5.1 2025-11-08 13:29:13.106581 | orchestrator | 2025-11-08 13:29:13.106593 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2025-11-08 13:29:13.106604 | orchestrator | Saturday 08 November 2025 13:29:07 +0000 (0:00:00.250) 0:05:02.046 ***** 2025-11-08 13:29:13.106615 | orchestrator | ok: [testbed-node-0] =>  2025-11-08 13:29:13.106625 | orchestrator |  docker_cli_version: 5:27.5.1 2025-11-08 13:29:13.106636 | orchestrator | ok: [testbed-node-1] =>  2025-11-08 13:29:13.106647 | orchestrator |  docker_cli_version: 5:27.5.1 2025-11-08 13:29:13.106658 | orchestrator | ok: [testbed-node-2] =>  2025-11-08 13:29:13.106669 | orchestrator |  docker_cli_version: 5:27.5.1 2025-11-08 13:29:13.106680 | orchestrator | ok: [testbed-node-3] =>  2025-11-08 13:29:13.106691 | orchestrator |  docker_cli_version: 5:27.5.1 2025-11-08 13:29:13.106701 | orchestrator | ok: [testbed-node-4] =>  2025-11-08 13:29:13.106712 | orchestrator |  docker_cli_version: 5:27.5.1 2025-11-08 13:29:13.106723 | orchestrator | ok: [testbed-node-5] =>  2025-11-08 13:29:13.106734 | orchestrator |  docker_cli_version: 5:27.5.1 2025-11-08 13:29:13.106745 | orchestrator | ok: [testbed-manager] =>  2025-11-08 13:29:13.106756 | orchestrator |  docker_cli_version: 5:27.5.1 2025-11-08 13:29:13.106766 | orchestrator | 2025-11-08 13:29:13.106777 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-11-08 13:29:13.106788 | orchestrator | Saturday 08 November 2025 13:29:08 +0000 (0:00:00.291) 0:05:02.338 ***** 2025-11-08 13:29:13.106807 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:29:13.106818 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:29:13.106829 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:29:13.106840 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:29:13.106872 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:29:13.106883 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:29:13.106895 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:29:13.106905 | orchestrator | 2025-11-08 13:29:13.106916 | orchestrator | TASK 
[osism.services.docker : Include zram storage tasks] ********************** 2025-11-08 13:29:13.106927 | orchestrator | Saturday 08 November 2025 13:29:08 +0000 (0:00:00.368) 0:05:02.706 ***** 2025-11-08 13:29:13.106938 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:29:13.106949 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:29:13.106959 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:29:13.106970 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:29:13.106981 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:29:13.106992 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:29:13.107002 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:29:13.107013 | orchestrator | 2025-11-08 13:29:13.107023 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-11-08 13:29:13.107034 | orchestrator | Saturday 08 November 2025 13:29:08 +0000 (0:00:00.278) 0:05:02.984 ***** 2025-11-08 13:29:13.107047 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-08 13:29:13.107061 | orchestrator | 2025-11-08 13:29:13.107072 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-11-08 13:29:13.107083 | orchestrator | Saturday 08 November 2025 13:29:09 +0000 (0:00:00.386) 0:05:03.371 ***** 2025-11-08 13:29:13.107094 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:29:13.107105 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:29:13.107116 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:29:13.107126 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:29:13.107138 | orchestrator | ok: [testbed-manager] 2025-11-08 13:29:13.107149 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:29:13.107159 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:29:13.107170 | orchestrator | 2025-11-08 13:29:13.107181 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-11-08 13:29:13.107191 | orchestrator | Saturday 08 November 2025 13:29:09 +0000 (0:00:00.783) 0:05:04.155 ***** 2025-11-08 13:29:13.107202 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:29:13.107213 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:29:13.107224 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:29:13.107234 | orchestrator | ok: [testbed-manager] 2025-11-08 13:29:13.107245 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:29:13.107255 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:29:13.107266 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:29:13.107276 | orchestrator | 2025-11-08 13:29:13.107287 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-11-08 13:29:13.107300 | orchestrator | Saturday 08 November 2025 13:29:12 +0000 (0:00:02.710) 0:05:06.865 ***** 2025-11-08 13:29:13.107311 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-11-08 13:29:13.107322 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-11-08 13:29:13.107333 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-11-08 13:29:13.107344 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-11-08 13:29:13.107354 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-11-08 13:29:13.107365 | 
orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-11-08 13:29:13.107376 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:29:13.107386 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-11-08 13:29:13.107397 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-11-08 13:29:13.107415 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-11-08 13:29:13.107426 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:29:13.107436 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-11-08 13:29:13.107447 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-11-08 13:29:13.107457 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-11-08 13:29:13.107468 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:29:13.107479 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-11-08 13:29:13.107497 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-11-08 13:30:14.109639 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:30:14.109781 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-11-08 13:30:14.109793 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-11-08 13:30:14.109804 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-11-08 13:30:14.109811 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-11-08 13:30:14.109838 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:30:14.109846 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:30:14.109854 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-11-08 13:30:14.109861 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-11-08 13:30:14.109868 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-11-08 13:30:14.109876 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:30:14.109884 | orchestrator | 2025-11-08 13:30:14.109894 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-11-08 13:30:14.109904 | orchestrator | Saturday 08 November 2025 13:29:13 +0000 (0:00:00.940) 0:05:07.806 ***** 2025-11-08 13:30:14.109912 | orchestrator | ok: [testbed-manager] 2025-11-08 13:30:14.109921 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:30:14.109929 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:30:14.109937 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:30:14.109945 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:30:14.109953 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:30:14.109961 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:30:14.109968 | orchestrator | 2025-11-08 13:30:14.109976 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-11-08 13:30:14.109984 | orchestrator | Saturday 08 November 2025 13:29:20 +0000 (0:00:06.946) 0:05:14.753 ***** 2025-11-08 13:30:14.109992 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:30:14.110001 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:30:14.110009 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:30:14.110060 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:30:14.110068 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:30:14.110076 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:30:14.110083 | orchestrator | ok: [testbed-manager] 2025-11-08 13:30:14.110090 | 
orchestrator | 2025-11-08 13:30:14.110097 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-11-08 13:30:14.110105 | orchestrator | Saturday 08 November 2025 13:29:21 +0000 (0:00:01.042) 0:05:15.796 ***** 2025-11-08 13:30:14.110112 | orchestrator | ok: [testbed-manager] 2025-11-08 13:30:14.110120 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:30:14.110126 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:30:14.110134 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:30:14.110141 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:30:14.110148 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:30:14.110155 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:30:14.110161 | orchestrator | 2025-11-08 13:30:14.110168 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-11-08 13:30:14.110176 | orchestrator | Saturday 08 November 2025 13:29:29 +0000 (0:00:08.130) 0:05:23.926 ***** 2025-11-08 13:30:14.110184 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:30:14.110191 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:30:14.110230 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:30:14.110240 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:30:14.110248 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:30:14.110255 | orchestrator | changed: [testbed-manager] 2025-11-08 13:30:14.110263 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:30:14.110271 | orchestrator | 2025-11-08 13:30:14.110279 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-11-08 13:30:14.110288 | orchestrator | Saturday 08 November 2025 13:29:32 +0000 (0:00:03.297) 0:05:27.224 ***** 2025-11-08 13:30:14.110297 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:30:14.110306 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:30:14.110314 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:30:14.110323 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:30:14.110331 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:30:14.110340 | orchestrator | ok: [testbed-manager] 2025-11-08 13:30:14.110348 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:30:14.110357 | orchestrator | 2025-11-08 13:30:14.110365 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-11-08 13:30:14.110373 | orchestrator | Saturday 08 November 2025 13:29:34 +0000 (0:00:01.442) 0:05:28.666 ***** 2025-11-08 13:30:14.110381 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:30:14.110390 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:30:14.110398 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:30:14.110406 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:30:14.110415 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:30:14.110423 | orchestrator | ok: [testbed-manager] 2025-11-08 13:30:14.110432 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:30:14.110440 | orchestrator | 2025-11-08 13:30:14.110449 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-11-08 13:30:14.110458 | orchestrator | Saturday 08 November 2025 13:29:35 +0000 (0:00:01.207) 0:05:29.874 ***** 2025-11-08 13:30:14.110467 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:30:14.110476 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:30:14.110484 | orchestrator | skipping: 
[testbed-node-2] 2025-11-08 13:30:14.110492 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:30:14.110499 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:30:14.110507 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:30:14.110516 | orchestrator | changed: [testbed-manager] 2025-11-08 13:30:14.110524 | orchestrator | 2025-11-08 13:30:14.110533 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-11-08 13:30:14.110542 | orchestrator | Saturday 08 November 2025 13:29:36 +0000 (0:00:01.001) 0:05:30.875 ***** 2025-11-08 13:30:14.110550 | orchestrator | ok: [testbed-manager] 2025-11-08 13:30:14.110557 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:30:14.110564 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:30:14.110572 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:30:14.110579 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:30:14.110586 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:30:14.110593 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:30:14.110600 | orchestrator | 2025-11-08 13:30:14.110608 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-11-08 13:30:14.110637 | orchestrator | Saturday 08 November 2025 13:29:46 +0000 (0:00:09.991) 0:05:40.866 ***** 2025-11-08 13:30:14.110646 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:30:14.110653 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:30:14.110661 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:30:14.110668 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:30:14.110676 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:30:14.110683 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:30:14.110691 | orchestrator | changed: [testbed-manager] 2025-11-08 13:30:14.110699 | orchestrator | 2025-11-08 13:30:14.110708 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-11-08 13:30:14.110716 | orchestrator | Saturday 08 November 2025 13:29:47 +0000 (0:00:00.890) 0:05:41.757 ***** 2025-11-08 13:30:14.110736 | orchestrator | ok: [testbed-manager] 2025-11-08 13:30:14.110745 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:30:14.110753 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:30:14.110761 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:30:14.110770 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:30:14.110778 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:30:14.110787 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:30:14.110794 | orchestrator | 2025-11-08 13:30:14.110802 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-11-08 13:30:14.110811 | orchestrator | Saturday 08 November 2025 13:29:56 +0000 (0:00:08.810) 0:05:50.567 ***** 2025-11-08 13:30:14.110837 | orchestrator | ok: [testbed-manager] 2025-11-08 13:30:14.110845 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:30:14.110853 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:30:14.110861 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:30:14.110869 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:30:14.110877 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:30:14.110885 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:30:14.110892 | orchestrator | 2025-11-08 13:30:14.110901 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker 
packages] *** 2025-11-08 13:30:14.110909 | orchestrator | Saturday 08 November 2025 13:30:07 +0000 (0:00:11.123) 0:06:01.691 ***** 2025-11-08 13:30:14.110917 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-11-08 13:30:14.110926 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-11-08 13:30:14.110934 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-11-08 13:30:14.110942 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-11-08 13:30:14.110951 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-11-08 13:30:14.110958 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-11-08 13:30:14.110965 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-11-08 13:30:14.110972 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-11-08 13:30:14.110979 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-11-08 13:30:14.110986 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-11-08 13:30:14.110992 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-11-08 13:30:14.110999 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-11-08 13:30:14.111008 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-11-08 13:30:14.111015 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-11-08 13:30:14.111025 | orchestrator | 2025-11-08 13:30:14.111033 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-11-08 13:30:14.111041 | orchestrator | Saturday 08 November 2025 13:30:08 +0000 (0:00:01.150) 0:06:02.842 ***** 2025-11-08 13:30:14.111050 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:30:14.111058 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:30:14.111065 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:30:14.111074 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:30:14.111082 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:30:14.111091 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:30:14.111099 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:30:14.111108 | orchestrator | 2025-11-08 13:30:14.111117 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-11-08 13:30:14.111124 | orchestrator | Saturday 08 November 2025 13:30:09 +0000 (0:00:00.520) 0:06:03.362 ***** 2025-11-08 13:30:14.111132 | orchestrator | ok: [testbed-manager] 2025-11-08 13:30:14.111140 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:30:14.111148 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:30:14.111156 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:30:14.111164 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:30:14.111172 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:30:14.111180 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:30:14.111198 | orchestrator | 2025-11-08 13:30:14.111207 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-11-08 13:30:14.111216 | orchestrator | Saturday 08 November 2025 13:30:13 +0000 (0:00:04.005) 0:06:07.367 ***** 2025-11-08 13:30:14.111224 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:30:14.111232 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:30:14.111240 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:30:14.111246 | orchestrator | skipping: [testbed-node-3] 2025-11-08 
13:30:14.111254 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:30:14.111261 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:30:14.111269 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:30:14.111277 | orchestrator | 2025-11-08 13:30:14.111286 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-11-08 13:30:14.111294 | orchestrator | Saturday 08 November 2025 13:30:13 +0000 (0:00:00.683) 0:06:08.051 ***** 2025-11-08 13:30:14.111353 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-11-08 13:30:14.111362 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-11-08 13:30:14.111369 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:30:14.111377 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-11-08 13:30:14.111385 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-11-08 13:30:14.111392 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:30:14.111399 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-11-08 13:30:14.111407 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-11-08 13:30:14.111415 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:30:14.111433 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-11-08 13:30:32.857484 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-11-08 13:30:32.857602 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:30:32.857610 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-11-08 13:30:32.857615 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-11-08 13:30:32.857621 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:30:32.857625 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-11-08 13:30:32.857630 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-11-08 13:30:32.857635 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:30:32.857640 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-11-08 13:30:32.857645 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-11-08 13:30:32.857651 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:30:32.857656 | orchestrator | 2025-11-08 13:30:32.857663 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-11-08 13:30:32.857670 | orchestrator | Saturday 08 November 2025 13:30:14 +0000 (0:00:00.569) 0:06:08.621 ***** 2025-11-08 13:30:32.857675 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:30:32.857681 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:30:32.857686 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:30:32.857691 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:30:32.857696 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:30:32.857701 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:30:32.857706 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:30:32.857712 | orchestrator | 2025-11-08 13:30:32.857717 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-11-08 13:30:32.857722 | orchestrator | Saturday 08 November 2025 13:30:14 +0000 (0:00:00.485) 0:06:09.107 ***** 2025-11-08 13:30:32.857728 | orchestrator | skipping: [testbed-node-0] 
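
The docker installation steps logged above (add the upstream apt repository and its gpg key, pin the packages to docker_version 5:27.5.1, hold containerd, then install the engine and CLI) follow a common apt pattern. This is a hedged sketch only: the repository URL, keyring path and the docker-ce/docker-ce-cli/containerd.io package names are the standard upstream ones and are assumed here, not read from the osism.services.docker role:

- name: Add repository gpg key
  ansible.builtin.get_url:
    url: https://download.docker.com/linux/ubuntu/gpg   # assumed upstream key URL
    dest: /etc/apt/keyrings/docker.asc
    mode: "0644"

- name: Add repository
  ansible.builtin.apt_repository:
    repo: "deb [signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
    filename: docker
    state: present

- name: Pin docker package version (apt preferences; a plausible equivalent of the role's pinning task)
  ansible.builtin.copy:
    dest: /etc/apt/preferences.d/docker-ce
    mode: "0644"
    content: |
      Package: docker-ce
      Pin: version 5:27.5.1*
      Pin-Priority: 1001

- name: Install docker and docker-cli packages at the pinned version
  ansible.builtin.apt:
    name:
      - "docker-ce=5:27.5.1*"
      - "docker-ce-cli=5:27.5.1*"
    state: present
    update_cache: true

- name: Lock containerd package
  ansible.builtin.dpkg_selections:
    name: containerd.io
    selection: hold
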
2025-11-08 13:30:32.857733 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:30:32.857738 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:30:32.857743 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:30:32.857748 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:30:32.857773 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:30:32.857779 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:30:32.857784 | orchestrator | 2025-11-08 13:30:32.857789 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-11-08 13:30:32.857794 | orchestrator | Saturday 08 November 2025 13:30:15 +0000 (0:00:00.478) 0:06:09.585 ***** 2025-11-08 13:30:32.857812 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:30:32.857817 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:30:32.857823 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:30:32.857828 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:30:32.857833 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:30:32.857838 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:30:32.857842 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:30:32.857848 | orchestrator | 2025-11-08 13:30:32.857853 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-11-08 13:30:32.857858 | orchestrator | Saturday 08 November 2025 13:30:16 +0000 (0:00:00.710) 0:06:10.296 ***** 2025-11-08 13:30:32.857863 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:30:32.857869 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:30:32.857874 | orchestrator | ok: [testbed-manager] 2025-11-08 13:30:32.857879 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:30:32.857884 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:30:32.857889 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:30:32.857894 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:30:32.857899 | orchestrator | 2025-11-08 13:30:32.857904 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-11-08 13:30:32.857909 | orchestrator | Saturday 08 November 2025 13:30:17 +0000 (0:00:01.813) 0:06:12.109 ***** 2025-11-08 13:30:32.857915 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-08 13:30:32.857923 | orchestrator | 2025-11-08 13:30:32.857928 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-11-08 13:30:32.857933 | orchestrator | Saturday 08 November 2025 13:30:18 +0000 (0:00:00.873) 0:06:12.983 ***** 2025-11-08 13:30:32.857938 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:30:32.857943 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:30:32.857948 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:30:32.857953 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:30:32.857958 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:30:32.857963 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:30:32.857968 | orchestrator | ok: [testbed-manager] 2025-11-08 13:30:32.857973 | orchestrator | 2025-11-08 13:30:32.857979 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-11-08 13:30:32.857984 | orchestrator | Saturday 08 November 2025 13:30:19 +0000 (0:00:00.789) 
0:06:13.772 ***** 2025-11-08 13:30:32.857989 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:30:32.857994 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:30:32.857999 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:30:32.858004 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:30:32.858009 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:30:32.858046 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:30:32.858054 | orchestrator | ok: [testbed-manager] 2025-11-08 13:30:32.858060 | orchestrator | 2025-11-08 13:30:32.858065 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-11-08 13:30:32.858071 | orchestrator | Saturday 08 November 2025 13:30:20 +0000 (0:00:01.081) 0:06:14.854 ***** 2025-11-08 13:30:32.858077 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:30:32.858083 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:30:32.858088 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:30:32.858094 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:30:32.858099 | orchestrator | ok: [testbed-manager] 2025-11-08 13:30:32.858105 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:30:32.858115 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:30:32.858121 | orchestrator | 2025-11-08 13:30:32.858127 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-11-08 13:30:32.858143 | orchestrator | Saturday 08 November 2025 13:30:21 +0000 (0:00:01.244) 0:06:16.098 ***** 2025-11-08 13:30:32.858149 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:30:32.858155 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:30:32.858161 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:30:32.858167 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:30:32.858173 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:30:32.858179 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:30:32.858184 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:30:32.858190 | orchestrator | 2025-11-08 13:30:32.858196 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-11-08 13:30:32.858201 | orchestrator | Saturday 08 November 2025 13:30:23 +0000 (0:00:01.253) 0:06:17.351 ***** 2025-11-08 13:30:32.858207 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:30:32.858213 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:30:32.858218 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:30:32.858224 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:30:32.858230 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:30:32.858235 | orchestrator | ok: [testbed-manager] 2025-11-08 13:30:32.858241 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:30:32.858247 | orchestrator | 2025-11-08 13:30:32.858253 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-11-08 13:30:32.858258 | orchestrator | Saturday 08 November 2025 13:30:24 +0000 (0:00:01.375) 0:06:18.727 ***** 2025-11-08 13:30:32.858264 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:30:32.858269 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:30:32.858275 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:30:32.858281 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:30:32.858287 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:30:32.858292 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:30:32.858298 | orchestrator 
| changed: [testbed-manager] 2025-11-08 13:30:32.858303 | orchestrator | 2025-11-08 13:30:32.858309 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-11-08 13:30:32.858315 | orchestrator | Saturday 08 November 2025 13:30:25 +0000 (0:00:01.307) 0:06:20.035 ***** 2025-11-08 13:30:32.858321 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-08 13:30:32.858327 | orchestrator | 2025-11-08 13:30:32.858332 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-11-08 13:30:32.858338 | orchestrator | Saturday 08 November 2025 13:30:26 +0000 (0:00:01.029) 0:06:21.065 ***** 2025-11-08 13:30:32.858343 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:30:32.858349 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:30:32.858355 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:30:32.858360 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:30:32.858366 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:30:32.858371 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:30:32.858377 | orchestrator | ok: [testbed-manager] 2025-11-08 13:30:32.858383 | orchestrator | 2025-11-08 13:30:32.858389 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-11-08 13:30:32.858394 | orchestrator | Saturday 08 November 2025 13:30:28 +0000 (0:00:01.369) 0:06:22.434 ***** 2025-11-08 13:30:32.858399 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:30:32.858404 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:30:32.858409 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:30:32.858414 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:30:32.858419 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:30:32.858424 | orchestrator | ok: [testbed-manager] 2025-11-08 13:30:32.858429 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:30:32.858443 | orchestrator | 2025-11-08 13:30:32.858449 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-11-08 13:30:32.858454 | orchestrator | Saturday 08 November 2025 13:30:29 +0000 (0:00:01.084) 0:06:23.519 ***** 2025-11-08 13:30:32.858459 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:30:32.858464 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:30:32.858469 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:30:32.858474 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:30:32.858479 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:30:32.858484 | orchestrator | ok: [testbed-manager] 2025-11-08 13:30:32.858489 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:30:32.858494 | orchestrator | 2025-11-08 13:30:32.858499 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-11-08 13:30:32.858504 | orchestrator | Saturday 08 November 2025 13:30:30 +0000 (0:00:01.281) 0:06:24.800 ***** 2025-11-08 13:30:32.858509 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:30:32.858514 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:30:32.858519 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:30:32.858524 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:30:32.858529 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:30:32.858534 | orchestrator | ok: [testbed-manager] 2025-11-08 13:30:32.858539 | orchestrator | ok: [testbed-node-4] 
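Note: the log above only records that the osism.services.docker role copied a systemd overlay file, a limits file and daemon.json and then managed the docker, docker.socket and containerd services; the rendered file contents themselves are not part of the console output. As a purely illustrative sketch (paths and values are assumptions, not taken from the OSISM templates), the applied result can be inspected on a node like this:

    # hypothetical paths/contents -- the real ones come from the role templates
    systemctl cat docker          # prints docker.service plus any drop-in overlay, e.g.
                                  #   [Service]
                                  #   TasksMax=infinity
    cat /etc/docker/daemon.json   # e.g. {"log-driver": "json-file", "log-opts": {"max-size": "10m"}}
    docker info --format '{{.LoggingDriver}} {{.CgroupDriver}}'   # confirm the settings docker actually runs with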
2025-11-08 13:30:32.858545 | orchestrator | 2025-11-08 13:30:32.858550 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-11-08 13:30:32.858555 | orchestrator | Saturday 08 November 2025 13:30:31 +0000 (0:00:01.126) 0:06:25.927 ***** 2025-11-08 13:30:32.858560 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-08 13:30:32.858565 | orchestrator | 2025-11-08 13:30:32.858570 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-11-08 13:30:32.858575 | orchestrator | Saturday 08 November 2025 13:30:32 +0000 (0:00:00.882) 0:06:26.809 ***** 2025-11-08 13:30:32.858580 | orchestrator | 2025-11-08 13:30:32.858585 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-11-08 13:30:32.858591 | orchestrator | Saturday 08 November 2025 13:30:32 +0000 (0:00:00.039) 0:06:26.848 ***** 2025-11-08 13:30:32.858596 | orchestrator | 2025-11-08 13:30:32.858601 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-11-08 13:30:32.858606 | orchestrator | Saturday 08 November 2025 13:30:32 +0000 (0:00:00.043) 0:06:26.892 ***** 2025-11-08 13:30:32.858611 | orchestrator | 2025-11-08 13:30:32.858616 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-11-08 13:30:32.858624 | orchestrator | Saturday 08 November 2025 13:30:32 +0000 (0:00:00.037) 0:06:26.930 ***** 2025-11-08 13:30:57.701739 | orchestrator | 2025-11-08 13:30:57.701866 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-11-08 13:30:57.701875 | orchestrator | Saturday 08 November 2025 13:30:32 +0000 (0:00:00.039) 0:06:26.969 ***** 2025-11-08 13:30:57.701881 | orchestrator | 2025-11-08 13:30:57.701885 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-11-08 13:30:57.701890 | orchestrator | Saturday 08 November 2025 13:30:32 +0000 (0:00:00.044) 0:06:27.014 ***** 2025-11-08 13:30:57.701894 | orchestrator | 2025-11-08 13:30:57.701899 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-11-08 13:30:57.701903 | orchestrator | Saturday 08 November 2025 13:30:32 +0000 (0:00:00.045) 0:06:27.059 ***** 2025-11-08 13:30:57.701908 | orchestrator | 2025-11-08 13:30:57.701912 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-11-08 13:30:57.701916 | orchestrator | Saturday 08 November 2025 13:30:32 +0000 (0:00:00.038) 0:06:27.098 ***** 2025-11-08 13:30:57.701921 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:30:57.701927 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:30:57.701931 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:30:57.701935 | orchestrator | 2025-11-08 13:30:57.701957 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-11-08 13:30:57.701962 | orchestrator | Saturday 08 November 2025 13:30:34 +0000 (0:00:01.202) 0:06:28.300 ***** 2025-11-08 13:30:57.701967 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:30:57.701972 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:30:57.701976 | orchestrator | changed: [testbed-node-2] 2025-11-08 
13:30:57.701980 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:30:57.701985 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:30:57.701989 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:30:57.701993 | orchestrator | changed: [testbed-manager] 2025-11-08 13:30:57.701997 | orchestrator | 2025-11-08 13:30:57.702002 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2025-11-08 13:30:57.702006 | orchestrator | Saturday 08 November 2025 13:30:35 +0000 (0:00:01.550) 0:06:29.851 ***** 2025-11-08 13:30:57.702010 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:30:57.702059 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:30:57.702065 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:30:57.702070 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:30:57.702074 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:30:57.702078 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:30:57.702083 | orchestrator | changed: [testbed-manager] 2025-11-08 13:30:57.702087 | orchestrator | 2025-11-08 13:30:57.702091 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-11-08 13:30:57.702096 | orchestrator | Saturday 08 November 2025 13:30:36 +0000 (0:00:01.253) 0:06:31.104 ***** 2025-11-08 13:30:57.702101 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:30:57.702105 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:30:57.702109 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:30:57.702114 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:30:57.702118 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:30:57.702122 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:30:57.702127 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:30:57.702131 | orchestrator | 2025-11-08 13:30:57.702135 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-11-08 13:30:57.702140 | orchestrator | Saturday 08 November 2025 13:30:39 +0000 (0:00:02.204) 0:06:33.309 ***** 2025-11-08 13:30:57.702144 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:30:57.702148 | orchestrator | 2025-11-08 13:30:57.702152 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-11-08 13:30:57.702157 | orchestrator | Saturday 08 November 2025 13:30:39 +0000 (0:00:00.086) 0:06:33.395 ***** 2025-11-08 13:30:57.702161 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:30:57.702166 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:30:57.702170 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:30:57.702174 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:30:57.702178 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:30:57.702183 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:30:57.702187 | orchestrator | ok: [testbed-manager] 2025-11-08 13:30:57.702191 | orchestrator | 2025-11-08 13:30:57.702196 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-11-08 13:30:57.702202 | orchestrator | Saturday 08 November 2025 13:30:40 +0000 (0:00:00.919) 0:06:34.314 ***** 2025-11-08 13:30:57.702206 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:30:57.702210 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:30:57.702215 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:30:57.702219 | orchestrator | skipping: [testbed-node-3] 2025-11-08 
13:30:57.702223 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:30:57.702228 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:30:57.702232 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:30:57.702236 | orchestrator | 2025-11-08 13:30:57.702241 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-11-08 13:30:57.702245 | orchestrator | Saturday 08 November 2025 13:30:40 +0000 (0:00:00.729) 0:06:35.043 ***** 2025-11-08 13:30:57.702255 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-08 13:30:57.702262 | orchestrator | 2025-11-08 13:30:57.702267 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-11-08 13:30:57.702271 | orchestrator | Saturday 08 November 2025 13:30:41 +0000 (0:00:00.879) 0:06:35.923 ***** 2025-11-08 13:30:57.702276 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:30:57.702280 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:30:57.702284 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:30:57.702289 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:30:57.702293 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:30:57.702298 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:30:57.702302 | orchestrator | ok: [testbed-manager] 2025-11-08 13:30:57.702306 | orchestrator | 2025-11-08 13:30:57.702310 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-11-08 13:30:57.702315 | orchestrator | Saturday 08 November 2025 13:30:42 +0000 (0:00:00.832) 0:06:36.755 ***** 2025-11-08 13:30:57.702319 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-11-08 13:30:57.702335 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-11-08 13:30:57.702343 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-11-08 13:30:57.702348 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-11-08 13:30:57.702352 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-11-08 13:30:57.702357 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-11-08 13:30:57.702361 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-11-08 13:30:57.702365 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-11-08 13:30:57.702370 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-11-08 13:30:57.702375 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-11-08 13:30:57.702379 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-11-08 13:30:57.702383 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-11-08 13:30:57.702388 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-11-08 13:30:57.702392 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-11-08 13:30:57.702396 | orchestrator | 2025-11-08 13:30:57.702400 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-11-08 13:30:57.702405 | orchestrator | Saturday 08 November 2025 13:30:44 +0000 (0:00:02.360) 0:06:39.116 ***** 2025-11-08 13:30:57.702409 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:30:57.702413 | orchestrator | skipping: [testbed-node-1] 
2025-11-08 13:30:57.702418 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:30:57.702422 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:30:57.702426 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:30:57.702430 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:30:57.702435 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:30:57.702439 | orchestrator | 2025-11-08 13:30:57.702443 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-11-08 13:30:57.702447 | orchestrator | Saturday 08 November 2025 13:30:45 +0000 (0:00:00.495) 0:06:39.611 ***** 2025-11-08 13:30:57.702453 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-08 13:30:57.702459 | orchestrator | 2025-11-08 13:30:57.702463 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-11-08 13:30:57.702467 | orchestrator | Saturday 08 November 2025 13:30:46 +0000 (0:00:00.820) 0:06:40.432 ***** 2025-11-08 13:30:57.702471 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:30:57.702476 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:30:57.702484 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:30:57.702488 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:30:57.702493 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:30:57.702497 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:30:57.702501 | orchestrator | ok: [testbed-manager] 2025-11-08 13:30:57.702506 | orchestrator | 2025-11-08 13:30:57.702510 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-11-08 13:30:57.702515 | orchestrator | Saturday 08 November 2025 13:30:47 +0000 (0:00:01.005) 0:06:41.437 ***** 2025-11-08 13:30:57.702519 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:30:57.702523 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:30:57.702527 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:30:57.702532 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:30:57.702536 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:30:57.702540 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:30:57.702544 | orchestrator | ok: [testbed-manager] 2025-11-08 13:30:57.702549 | orchestrator | 2025-11-08 13:30:57.702553 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-11-08 13:30:57.702557 | orchestrator | Saturday 08 November 2025 13:30:47 +0000 (0:00:00.778) 0:06:42.216 ***** 2025-11-08 13:30:57.702562 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:30:57.702566 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:30:57.702570 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:30:57.702574 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:30:57.702579 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:30:57.702583 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:30:57.702587 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:30:57.702591 | orchestrator | 2025-11-08 13:30:57.702596 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-11-08 13:30:57.702600 | orchestrator | Saturday 08 November 2025 13:30:48 +0000 (0:00:00.500) 0:06:42.716 ***** 2025-11-08 13:30:57.702605 | orchestrator | ok: 
[testbed-node-0] 2025-11-08 13:30:57.702609 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:30:57.702613 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:30:57.702617 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:30:57.702622 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:30:57.702626 | orchestrator | ok: [testbed-manager] 2025-11-08 13:30:57.702630 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:30:57.702635 | orchestrator | 2025-11-08 13:30:57.702639 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-11-08 13:30:57.702643 | orchestrator | Saturday 08 November 2025 13:30:49 +0000 (0:00:01.421) 0:06:44.138 ***** 2025-11-08 13:30:57.702648 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:30:57.702652 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:30:57.702656 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:30:57.702660 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:30:57.702665 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:30:57.702669 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:30:57.702673 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:30:57.702677 | orchestrator | 2025-11-08 13:30:57.702682 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-11-08 13:30:57.702686 | orchestrator | Saturday 08 November 2025 13:30:50 +0000 (0:00:00.475) 0:06:44.614 ***** 2025-11-08 13:30:57.702690 | orchestrator | ok: [testbed-manager] 2025-11-08 13:30:57.702695 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:30:57.702699 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:30:57.702703 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:30:57.702708 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:30:57.702712 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:30:57.702718 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:31:28.285961 | orchestrator | 2025-11-08 13:31:28.286146 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-11-08 13:31:28.286166 | orchestrator | Saturday 08 November 2025 13:30:57 +0000 (0:00:07.331) 0:06:51.946 ***** 2025-11-08 13:31:28.286178 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:31:28.286214 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:31:28.286226 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:31:28.286237 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:31:28.286248 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:31:28.286259 | orchestrator | ok: [testbed-manager] 2025-11-08 13:31:28.286271 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:31:28.286282 | orchestrator | 2025-11-08 13:31:28.286293 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-11-08 13:31:28.286304 | orchestrator | Saturday 08 November 2025 13:30:58 +0000 (0:00:01.249) 0:06:53.196 ***** 2025-11-08 13:31:28.286315 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:31:28.286326 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:31:28.286337 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:31:28.286348 | orchestrator | ok: [testbed-manager] 2025-11-08 13:31:28.286359 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:31:28.286370 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:31:28.286380 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:31:28.286391 | orchestrator | 
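Note: osism.target and the docker-compose systemd unit installed in the tasks above are rendered from role templates, so their exact contents are not visible in this log; a systemd target is typically just a [Unit]/[Install] stanza that other units are attached to. A hedged way to verify the result on a node (the names osism.target and docker-compose-plugin come from the log; everything else is illustrative):

    systemctl cat osism.target          # inspect the installed target; contents are defined by the role template
    systemctl is-enabled osism.target   # the log shows the target being enabled on all hosts
    docker compose version              # docker-compose-plugin provides the "docker compose" subcommand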
2025-11-08 13:31:28.286402 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-11-08 13:31:28.286413 | orchestrator | Saturday 08 November 2025 13:31:00 +0000 (0:00:01.577) 0:06:54.773 ***** 2025-11-08 13:31:28.286424 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:31:28.286435 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:31:28.286445 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:31:28.286456 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:31:28.286467 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:31:28.286477 | orchestrator | ok: [testbed-manager] 2025-11-08 13:31:28.286489 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:31:28.286501 | orchestrator | 2025-11-08 13:31:28.286514 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-11-08 13:31:28.286527 | orchestrator | Saturday 08 November 2025 13:31:02 +0000 (0:00:01.537) 0:06:56.310 ***** 2025-11-08 13:31:28.286540 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:31:28.286552 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:31:28.286564 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:31:28.286576 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:31:28.286588 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:31:28.286600 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:31:28.286612 | orchestrator | ok: [testbed-manager] 2025-11-08 13:31:28.286624 | orchestrator | 2025-11-08 13:31:28.286636 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-11-08 13:31:28.286649 | orchestrator | Saturday 08 November 2025 13:31:03 +0000 (0:00:01.063) 0:06:57.374 ***** 2025-11-08 13:31:28.286662 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:31:28.286681 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:31:28.286700 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:31:28.286720 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:31:28.286739 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:31:28.286788 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:31:28.286806 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:31:28.286823 | orchestrator | 2025-11-08 13:31:28.286840 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-11-08 13:31:28.286857 | orchestrator | Saturday 08 November 2025 13:31:03 +0000 (0:00:00.786) 0:06:58.161 ***** 2025-11-08 13:31:28.286875 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:31:28.286891 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:31:28.286910 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:31:28.286927 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:31:28.286938 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:31:28.286949 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:31:28.286960 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:31:28.286970 | orchestrator | 2025-11-08 13:31:28.286983 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-11-08 13:31:28.287006 | orchestrator | Saturday 08 November 2025 13:31:04 +0000 (0:00:00.512) 0:06:58.674 ***** 2025-11-08 13:31:28.287017 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:31:28.287028 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:31:28.287039 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:31:28.287049 | 
orchestrator | ok: [testbed-node-3] 2025-11-08 13:31:28.287060 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:31:28.287070 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:31:28.287081 | orchestrator | ok: [testbed-manager] 2025-11-08 13:31:28.287092 | orchestrator | 2025-11-08 13:31:28.287102 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-11-08 13:31:28.287113 | orchestrator | Saturday 08 November 2025 13:31:04 +0000 (0:00:00.479) 0:06:59.153 ***** 2025-11-08 13:31:28.287124 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:31:28.287134 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:31:28.287145 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:31:28.287155 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:31:28.287166 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:31:28.287176 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:31:28.287187 | orchestrator | ok: [testbed-manager] 2025-11-08 13:31:28.287198 | orchestrator | 2025-11-08 13:31:28.287208 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-11-08 13:31:28.287219 | orchestrator | Saturday 08 November 2025 13:31:05 +0000 (0:00:00.695) 0:06:59.849 ***** 2025-11-08 13:31:28.287230 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:31:28.287240 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:31:28.287251 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:31:28.287262 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:31:28.287272 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:31:28.287283 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:31:28.287293 | orchestrator | ok: [testbed-manager] 2025-11-08 13:31:28.287304 | orchestrator | 2025-11-08 13:31:28.287315 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-11-08 13:31:28.287325 | orchestrator | Saturday 08 November 2025 13:31:06 +0000 (0:00:00.523) 0:07:00.373 ***** 2025-11-08 13:31:28.287336 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:31:28.287347 | orchestrator | ok: [testbed-manager] 2025-11-08 13:31:28.287357 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:31:28.287368 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:31:28.287378 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:31:28.287389 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:31:28.287399 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:31:28.287410 | orchestrator | 2025-11-08 13:31:28.287456 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-11-08 13:31:28.287473 | orchestrator | Saturday 08 November 2025 13:31:11 +0000 (0:00:05.491) 0:07:05.864 ***** 2025-11-08 13:31:28.287485 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:31:28.287496 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:31:28.287506 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:31:28.287517 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:31:28.287528 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:31:28.287538 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:31:28.287549 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:31:28.287560 | orchestrator | 2025-11-08 13:31:28.287570 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-11-08 13:31:28.287581 | orchestrator | Saturday 08 November 2025 13:31:12 +0000 (0:00:00.529) 0:07:06.393 ***** 2025-11-08 
13:31:28.287594 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-08 13:31:28.287607 | orchestrator | 2025-11-08 13:31:28.287618 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-11-08 13:31:28.287629 | orchestrator | Saturday 08 November 2025 13:31:13 +0000 (0:00:00.956) 0:07:07.350 ***** 2025-11-08 13:31:28.287647 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:31:28.287658 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:31:28.287669 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:31:28.287679 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:31:28.287690 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:31:28.287700 | orchestrator | ok: [testbed-manager] 2025-11-08 13:31:28.287711 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:31:28.287722 | orchestrator | 2025-11-08 13:31:28.287732 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-11-08 13:31:28.287743 | orchestrator | Saturday 08 November 2025 13:31:14 +0000 (0:00:01.707) 0:07:09.057 ***** 2025-11-08 13:31:28.287773 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:31:28.287784 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:31:28.287794 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:31:28.287805 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:31:28.287816 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:31:28.287826 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:31:28.287837 | orchestrator | ok: [testbed-manager] 2025-11-08 13:31:28.287848 | orchestrator | 2025-11-08 13:31:28.287859 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-11-08 13:31:28.287870 | orchestrator | Saturday 08 November 2025 13:31:15 +0000 (0:00:01.100) 0:07:10.158 ***** 2025-11-08 13:31:28.287881 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:31:28.287891 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:31:28.287902 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:31:28.287913 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:31:28.287923 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:31:28.287934 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:31:28.287945 | orchestrator | ok: [testbed-manager] 2025-11-08 13:31:28.287955 | orchestrator | 2025-11-08 13:31:28.287966 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-11-08 13:31:28.287977 | orchestrator | Saturday 08 November 2025 13:31:16 +0000 (0:00:00.848) 0:07:11.006 ***** 2025-11-08 13:31:28.287988 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-11-08 13:31:28.288002 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-11-08 13:31:28.288013 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-11-08 13:31:28.288024 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-11-08 13:31:28.288035 | 
orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-11-08 13:31:28.288046 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-11-08 13:31:28.288057 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-11-08 13:31:28.288068 | orchestrator | 2025-11-08 13:31:28.288079 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-11-08 13:31:28.288090 | orchestrator | Saturday 08 November 2025 13:31:18 +0000 (0:00:01.847) 0:07:12.853 ***** 2025-11-08 13:31:28.288101 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-08 13:31:28.288112 | orchestrator | 2025-11-08 13:31:28.288123 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-11-08 13:31:28.288134 | orchestrator | Saturday 08 November 2025 13:31:19 +0000 (0:00:00.792) 0:07:13.645 ***** 2025-11-08 13:31:28.288145 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:31:28.288162 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:31:28.288173 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:31:28.288184 | orchestrator | changed: [testbed-manager] 2025-11-08 13:31:28.288195 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:31:28.288205 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:31:28.288216 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:31:28.288227 | orchestrator | 2025-11-08 13:31:28.288244 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-11-08 13:31:57.691025 | orchestrator | Saturday 08 November 2025 13:31:28 +0000 (0:00:08.884) 0:07:22.530 ***** 2025-11-08 13:31:57.691158 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:31:57.691176 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:31:57.691188 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:31:57.691200 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:31:57.691211 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:31:57.691222 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:31:57.691233 | orchestrator | ok: [testbed-manager] 2025-11-08 13:31:57.691244 | orchestrator | 2025-11-08 13:31:57.691257 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-11-08 13:31:57.691268 | orchestrator | Saturday 08 November 2025 13:31:30 +0000 (0:00:01.859) 0:07:24.390 ***** 2025-11-08 13:31:57.691279 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:31:57.691290 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:31:57.691301 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:31:57.691312 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:31:57.691322 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:31:57.691333 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:31:57.691344 | orchestrator | 2025-11-08 13:31:57.691355 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-11-08 13:31:57.691366 | orchestrator | Saturday 08 November 2025 13:31:31 +0000 (0:00:01.239) 
0:07:25.630 ***** 2025-11-08 13:31:57.691377 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:31:57.691390 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:31:57.691401 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:31:57.691412 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:31:57.691423 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:31:57.691433 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:31:57.691445 | orchestrator | changed: [testbed-manager] 2025-11-08 13:31:57.691457 | orchestrator | 2025-11-08 13:31:57.691468 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-11-08 13:31:57.691479 | orchestrator | 2025-11-08 13:31:57.691490 | orchestrator | TASK [Include hardening role] ************************************************** 2025-11-08 13:31:57.691501 | orchestrator | Saturday 08 November 2025 13:31:32 +0000 (0:00:01.417) 0:07:27.047 ***** 2025-11-08 13:31:57.691512 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:31:57.691523 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:31:57.691534 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:31:57.691545 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:31:57.691556 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:31:57.691566 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:31:57.691577 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:31:57.691588 | orchestrator | 2025-11-08 13:31:57.691599 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-11-08 13:31:57.691610 | orchestrator | 2025-11-08 13:31:57.691621 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-11-08 13:31:57.691632 | orchestrator | Saturday 08 November 2025 13:31:33 +0000 (0:00:00.490) 0:07:27.537 ***** 2025-11-08 13:31:57.691643 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:31:57.691654 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:31:57.691665 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:31:57.691676 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:31:57.691686 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:31:57.691697 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:31:57.691708 | orchestrator | changed: [testbed-manager] 2025-11-08 13:31:57.691780 | orchestrator | 2025-11-08 13:31:57.691793 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-11-08 13:31:57.691805 | orchestrator | Saturday 08 November 2025 13:31:34 +0000 (0:00:01.255) 0:07:28.792 ***** 2025-11-08 13:31:57.691815 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:31:57.691826 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:31:57.691837 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:31:57.691848 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:31:57.691859 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:31:57.691870 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:31:57.691881 | orchestrator | ok: [testbed-manager] 2025-11-08 13:31:57.691891 | orchestrator | 2025-11-08 13:31:57.691903 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-11-08 13:31:57.691943 | orchestrator | Saturday 08 November 2025 13:31:35 +0000 (0:00:01.349) 0:07:30.142 ***** 2025-11-08 13:31:57.691954 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:31:57.691965 | orchestrator | 
skipping: [testbed-node-1] 2025-11-08 13:31:57.691976 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:31:57.691987 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:31:57.691997 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:31:57.692008 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:31:57.692019 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:31:57.692030 | orchestrator | 2025-11-08 13:31:57.692040 | orchestrator | TASK [Include smartd role] ***************************************************** 2025-11-08 13:31:57.692051 | orchestrator | Saturday 08 November 2025 13:31:36 +0000 (0:00:00.690) 0:07:30.832 ***** 2025-11-08 13:31:57.692063 | orchestrator | included: osism.services.smartd for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-08 13:31:57.692076 | orchestrator | 2025-11-08 13:31:57.692087 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-11-08 13:31:57.692098 | orchestrator | Saturday 08 November 2025 13:31:37 +0000 (0:00:00.804) 0:07:31.636 ***** 2025-11-08 13:31:57.692110 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-08 13:31:57.692124 | orchestrator | 2025-11-08 13:31:57.692135 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-11-08 13:31:57.692146 | orchestrator | Saturday 08 November 2025 13:31:38 +0000 (0:00:00.783) 0:07:32.420 ***** 2025-11-08 13:31:57.692157 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:31:57.692168 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:31:57.692179 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:31:57.692190 | orchestrator | changed: [testbed-manager] 2025-11-08 13:31:57.692201 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:31:57.692211 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:31:57.692222 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:31:57.692233 | orchestrator | 2025-11-08 13:31:57.692273 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-11-08 13:31:57.692286 | orchestrator | Saturday 08 November 2025 13:31:46 +0000 (0:00:08.558) 0:07:40.979 ***** 2025-11-08 13:31:57.692297 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:31:57.692308 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:31:57.692319 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:31:57.692330 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:31:57.692340 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:31:57.692351 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:31:57.692362 | orchestrator | changed: [testbed-manager] 2025-11-08 13:31:57.692373 | orchestrator | 2025-11-08 13:31:57.692384 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-11-08 13:31:57.692395 | orchestrator | Saturday 08 November 2025 13:31:47 +0000 (0:00:00.848) 0:07:41.827 ***** 2025-11-08 13:31:57.692415 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:31:57.692426 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:31:57.692437 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:31:57.692448 | orchestrator | changed: [testbed-node-3] 
2025-11-08 13:31:57.692459 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:31:57.692470 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:31:57.692480 | orchestrator | changed: [testbed-manager] 2025-11-08 13:31:57.692491 | orchestrator | 2025-11-08 13:31:57.692502 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-11-08 13:31:57.692513 | orchestrator | Saturday 08 November 2025 13:31:48 +0000 (0:00:01.276) 0:07:43.103 ***** 2025-11-08 13:31:57.692524 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:31:57.692535 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:31:57.692546 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:31:57.692557 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:31:57.692567 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:31:57.692578 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:31:57.692589 | orchestrator | changed: [testbed-manager] 2025-11-08 13:31:57.692599 | orchestrator | 2025-11-08 13:31:57.692610 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-11-08 13:31:57.692621 | orchestrator | Saturday 08 November 2025 13:31:50 +0000 (0:00:01.883) 0:07:44.986 ***** 2025-11-08 13:31:57.692632 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:31:57.692643 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:31:57.692653 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:31:57.692664 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:31:57.692675 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:31:57.692685 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:31:57.692696 | orchestrator | changed: [testbed-manager] 2025-11-08 13:31:57.692707 | orchestrator | 2025-11-08 13:31:57.692718 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-11-08 13:31:57.692745 | orchestrator | Saturday 08 November 2025 13:31:51 +0000 (0:00:01.170) 0:07:46.157 ***** 2025-11-08 13:31:57.692757 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:31:57.692768 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:31:57.692778 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:31:57.692789 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:31:57.692800 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:31:57.692811 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:31:57.692822 | orchestrator | changed: [testbed-manager] 2025-11-08 13:31:57.692833 | orchestrator | 2025-11-08 13:31:57.692844 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-11-08 13:31:57.692855 | orchestrator | 2025-11-08 13:31:57.692866 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-11-08 13:31:57.692877 | orchestrator | Saturday 08 November 2025 13:31:53 +0000 (0:00:01.111) 0:07:47.268 ***** 2025-11-08 13:31:57.692888 | orchestrator | included: osism.commons.state for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-08 13:31:57.692900 | orchestrator | 2025-11-08 13:31:57.692911 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-11-08 13:31:57.692922 | orchestrator | Saturday 08 November 2025 13:31:53 +0000 (0:00:00.925) 0:07:48.194 ***** 2025-11-08 13:31:57.692933 | orchestrator | ok: [testbed-node-0] 2025-11-08 
13:31:57.692944 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:31:57.692955 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:31:57.692966 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:31:57.692976 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:31:57.692987 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:31:57.692999 | orchestrator | ok: [testbed-manager] 2025-11-08 13:31:57.693009 | orchestrator | 2025-11-08 13:31:57.693020 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-11-08 13:31:57.693031 | orchestrator | Saturday 08 November 2025 13:31:54 +0000 (0:00:00.827) 0:07:49.022 ***** 2025-11-08 13:31:57.693049 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:31:57.693060 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:31:57.693071 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:31:57.693082 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:31:57.693093 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:31:57.693104 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:31:57.693115 | orchestrator | changed: [testbed-manager] 2025-11-08 13:31:57.693125 | orchestrator | 2025-11-08 13:31:57.693137 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-11-08 13:31:57.693148 | orchestrator | Saturday 08 November 2025 13:31:55 +0000 (0:00:01.153) 0:07:50.176 ***** 2025-11-08 13:31:57.693159 | orchestrator | included: osism.commons.state for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-08 13:31:57.693170 | orchestrator | 2025-11-08 13:31:57.693181 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-11-08 13:31:57.693192 | orchestrator | Saturday 08 November 2025 13:31:56 +0000 (0:00:00.971) 0:07:51.147 ***** 2025-11-08 13:31:57.693203 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:31:57.693214 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:31:57.693225 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:31:57.693235 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:31:57.693246 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:31:57.693257 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:31:57.693268 | orchestrator | ok: [testbed-manager] 2025-11-08 13:31:57.693279 | orchestrator | 2025-11-08 13:31:57.693298 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-11-08 13:31:59.244314 | orchestrator | Saturday 08 November 2025 13:31:57 +0000 (0:00:00.788) 0:07:51.936 ***** 2025-11-08 13:31:59.244431 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:31:59.244449 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:31:59.244462 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:31:59.244473 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:31:59.244484 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:31:59.244495 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:31:59.244506 | orchestrator | changed: [testbed-manager] 2025-11-08 13:31:59.244517 | orchestrator | 2025-11-08 13:31:59.244529 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 13:31:59.244542 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2025-11-08 13:31:59.244555 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 
failed=0 skipped=37  rescued=0 ignored=0 2025-11-08 13:31:59.244566 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-11-08 13:31:59.244577 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-11-08 13:31:59.244589 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-11-08 13:31:59.244600 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-11-08 13:31:59.244611 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-11-08 13:31:59.244621 | orchestrator | 2025-11-08 13:31:59.244632 | orchestrator | 2025-11-08 13:31:59.244643 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 13:31:59.244654 | orchestrator | Saturday 08 November 2025 13:31:58 +0000 (0:00:01.116) 0:07:53.053 ***** 2025-11-08 13:31:59.244665 | orchestrator | =============================================================================== 2025-11-08 13:31:59.244703 | orchestrator | osism.commons.packages : Install required packages --------------------- 75.95s 2025-11-08 13:31:59.244714 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 35.17s 2025-11-08 13:31:59.244754 | orchestrator | osism.commons.packages : Download required packages -------------------- 29.23s 2025-11-08 13:31:59.244765 | orchestrator | osism.commons.repository : Update package cache ------------------------ 20.54s 2025-11-08 13:31:59.244777 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.29s 2025-11-08 13:31:59.244790 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.20s 2025-11-08 13:31:59.244803 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.12s 2025-11-08 13:31:59.244816 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.99s 2025-11-08 13:31:59.244829 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.98s 2025-11-08 13:31:59.244841 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.88s 2025-11-08 13:31:59.244854 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.81s 2025-11-08 13:31:59.244866 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.63s 2025-11-08 13:31:59.244878 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.56s 2025-11-08 13:31:59.244891 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.13s 2025-11-08 13:31:59.244903 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.86s 2025-11-08 13:31:59.244915 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.33s 2025-11-08 13:31:59.244927 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.95s 2025-11-08 13:31:59.244939 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.96s 2025-11-08 13:31:59.244951 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.69s 2025-11-08 
13:31:59.244964 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.49s 2025-11-08 13:31:59.525482 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-11-08 13:31:59.525838 | orchestrator | + osism apply network 2025-11-08 13:32:12.240581 | orchestrator | 2025-11-08 13:32:12 | INFO  | Task 91822052-d71f-4720-99b4-1faee0d39856 (network) was prepared for execution. 2025-11-08 13:32:12.240701 | orchestrator | 2025-11-08 13:32:12 | INFO  | It takes a moment until task 91822052-d71f-4720-99b4-1faee0d39856 (network) has been started and output is visible here. 2025-11-08 13:32:39.865161 | orchestrator | 2025-11-08 13:32:39.865227 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-11-08 13:32:39.865241 | orchestrator | 2025-11-08 13:32:39.865252 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-11-08 13:32:39.865263 | orchestrator | Saturday 08 November 2025 13:32:16 +0000 (0:00:00.253) 0:00:00.253 ***** 2025-11-08 13:32:39.865274 | orchestrator | ok: [testbed-manager] 2025-11-08 13:32:39.865285 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:32:39.865295 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:32:39.865305 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:32:39.865324 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:32:39.865335 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:32:39.865345 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:32:39.865355 | orchestrator | 2025-11-08 13:32:39.865365 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-11-08 13:32:39.865375 | orchestrator | Saturday 08 November 2025 13:32:17 +0000 (0:00:00.697) 0:00:00.950 ***** 2025-11-08 13:32:39.865386 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 13:32:39.865400 | orchestrator | 2025-11-08 13:32:39.865482 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-11-08 13:32:39.865494 | orchestrator | Saturday 08 November 2025 13:32:18 +0000 (0:00:01.153) 0:00:02.103 ***** 2025-11-08 13:32:39.865503 | orchestrator | ok: [testbed-manager] 2025-11-08 13:32:39.865513 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:32:39.865523 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:32:39.865532 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:32:39.865542 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:32:39.865551 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:32:39.865561 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:32:39.865570 | orchestrator | 2025-11-08 13:32:39.865580 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-11-08 13:32:39.865590 | orchestrator | Saturday 08 November 2025 13:32:20 +0000 (0:00:01.942) 0:00:04.046 ***** 2025-11-08 13:32:39.865600 | orchestrator | ok: [testbed-manager] 2025-11-08 13:32:39.865610 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:32:39.865619 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:32:39.865629 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:32:39.865639 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:32:39.865648 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:32:39.865658 | orchestrator | ok: 
[testbed-node-5] 2025-11-08 13:32:39.865667 | orchestrator | 2025-11-08 13:32:39.865677 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-11-08 13:32:39.865687 | orchestrator | Saturday 08 November 2025 13:32:22 +0000 (0:00:01.693) 0:00:05.740 ***** 2025-11-08 13:32:39.865726 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-11-08 13:32:39.865739 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-11-08 13:32:39.865750 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-11-08 13:32:39.865761 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-11-08 13:32:39.865772 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-11-08 13:32:39.865783 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-11-08 13:32:39.865793 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-11-08 13:32:39.865804 | orchestrator | 2025-11-08 13:32:39.865815 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-11-08 13:32:39.865825 | orchestrator | Saturday 08 November 2025 13:32:23 +0000 (0:00:00.956) 0:00:06.696 ***** 2025-11-08 13:32:39.865837 | orchestrator | ok: [testbed-manager -> localhost] 2025-11-08 13:32:39.865848 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-11-08 13:32:39.865858 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-08 13:32:39.865868 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-11-08 13:32:39.865879 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-11-08 13:32:39.865889 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-11-08 13:32:39.865900 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-11-08 13:32:39.865910 | orchestrator | 2025-11-08 13:32:39.865921 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-11-08 13:32:39.865931 | orchestrator | Saturday 08 November 2025 13:32:26 +0000 (0:00:03.075) 0:00:09.772 ***** 2025-11-08 13:32:39.865942 | orchestrator | changed: [testbed-manager] 2025-11-08 13:32:39.865952 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:32:39.865961 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:32:39.865970 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:32:39.865980 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:32:39.865989 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:32:39.865998 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:32:39.866008 | orchestrator | 2025-11-08 13:32:39.866064 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-11-08 13:32:39.866078 | orchestrator | Saturday 08 November 2025 13:32:27 +0000 (0:00:01.607) 0:00:11.379 ***** 2025-11-08 13:32:39.866087 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-11-08 13:32:39.866097 | orchestrator | ok: [testbed-manager -> localhost] 2025-11-08 13:32:39.866107 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-08 13:32:39.866135 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-11-08 13:32:39.866144 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-11-08 13:32:39.866154 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-11-08 13:32:39.866164 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-11-08 13:32:39.866174 | orchestrator | 2025-11-08 13:32:39.866183 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-11-08 
13:32:39.866193 | orchestrator | Saturday 08 November 2025 13:32:29 +0000 (0:00:01.649) 0:00:13.029 ***** 2025-11-08 13:32:39.866203 | orchestrator | ok: [testbed-manager] 2025-11-08 13:32:39.866212 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:32:39.866222 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:32:39.866232 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:32:39.866241 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:32:39.866251 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:32:39.866261 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:32:39.866270 | orchestrator | 2025-11-08 13:32:39.866280 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-11-08 13:32:39.866302 | orchestrator | Saturday 08 November 2025 13:32:30 +0000 (0:00:01.105) 0:00:14.134 ***** 2025-11-08 13:32:39.866312 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:32:39.866322 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:32:39.866332 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:32:39.866341 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:32:39.866351 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:32:39.866360 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:32:39.866370 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:32:39.866379 | orchestrator | 2025-11-08 13:32:39.866389 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-11-08 13:32:39.866399 | orchestrator | Saturday 08 November 2025 13:32:31 +0000 (0:00:00.638) 0:00:14.772 ***** 2025-11-08 13:32:39.866409 | orchestrator | ok: [testbed-manager] 2025-11-08 13:32:39.866418 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:32:39.866428 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:32:39.866438 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:32:39.866447 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:32:39.866457 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:32:39.866466 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:32:39.866476 | orchestrator | 2025-11-08 13:32:39.866486 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-11-08 13:32:39.866495 | orchestrator | Saturday 08 November 2025 13:32:33 +0000 (0:00:02.115) 0:00:16.888 ***** 2025-11-08 13:32:39.866505 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:32:39.866514 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:32:39.866524 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:32:39.866533 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:32:39.866543 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:32:39.866552 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:32:39.866562 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-11-08 13:32:39.866573 | orchestrator | 2025-11-08 13:32:39.866583 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-11-08 13:32:39.866593 | orchestrator | Saturday 08 November 2025 13:32:34 +0000 (0:00:00.863) 0:00:17.752 ***** 2025-11-08 13:32:39.866603 | orchestrator | ok: [testbed-manager] 2025-11-08 13:32:39.866612 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:32:39.866622 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:32:39.866631 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:32:39.866641 
| orchestrator | changed: [testbed-node-3] 2025-11-08 13:32:39.866650 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:32:39.866660 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:32:39.866669 | orchestrator | 2025-11-08 13:32:39.866679 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-11-08 13:32:39.866689 | orchestrator | Saturday 08 November 2025 13:32:35 +0000 (0:00:01.624) 0:00:19.377 ***** 2025-11-08 13:32:39.866726 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 13:32:39.866740 | orchestrator | 2025-11-08 13:32:39.866749 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-11-08 13:32:39.866759 | orchestrator | Saturday 08 November 2025 13:32:36 +0000 (0:00:01.204) 0:00:20.581 ***** 2025-11-08 13:32:39.866769 | orchestrator | ok: [testbed-manager] 2025-11-08 13:32:39.866778 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:32:39.866788 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:32:39.866797 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:32:39.866807 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:32:39.866816 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:32:39.866825 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:32:39.866835 | orchestrator | 2025-11-08 13:32:39.866844 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-11-08 13:32:39.866854 | orchestrator | Saturday 08 November 2025 13:32:37 +0000 (0:00:00.936) 0:00:21.517 ***** 2025-11-08 13:32:39.866863 | orchestrator | ok: [testbed-manager] 2025-11-08 13:32:39.866873 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:32:39.866882 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:32:39.866891 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:32:39.866901 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:32:39.866910 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:32:39.866919 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:32:39.866929 | orchestrator | 2025-11-08 13:32:39.866938 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-11-08 13:32:39.866948 | orchestrator | Saturday 08 November 2025 13:32:38 +0000 (0:00:00.836) 0:00:22.354 ***** 2025-11-08 13:32:39.866957 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-11-08 13:32:39.866967 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-11-08 13:32:39.866977 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-11-08 13:32:39.866986 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-11-08 13:32:39.866995 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-11-08 13:32:39.867005 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-11-08 13:32:39.867022 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-11-08 13:32:39.867032 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-11-08 13:32:39.867042 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-11-08 13:32:39.867052 | orchestrator | 
changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-11-08 13:32:39.867062 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-11-08 13:32:39.867071 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-11-08 13:32:39.867081 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-11-08 13:32:39.867091 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-11-08 13:32:39.867100 | orchestrator | 2025-11-08 13:32:39.867115 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-11-08 13:32:55.696742 | orchestrator | Saturday 08 November 2025 13:32:39 +0000 (0:00:01.178) 0:00:23.532 ***** 2025-11-08 13:32:55.696873 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:32:55.696893 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:32:55.696905 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:32:55.696916 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:32:55.696928 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:32:55.696938 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:32:55.696986 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:32:55.696999 | orchestrator | 2025-11-08 13:32:55.697011 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-11-08 13:32:55.697023 | orchestrator | Saturday 08 November 2025 13:32:40 +0000 (0:00:00.626) 0:00:24.159 ***** 2025-11-08 13:32:55.697036 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-2, testbed-node-0, testbed-node-1, testbed-node-4, testbed-node-5, testbed-node-3 2025-11-08 13:32:55.697049 | orchestrator | 2025-11-08 13:32:55.697061 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-11-08 13:32:55.697072 | orchestrator | Saturday 08 November 2025 13:32:44 +0000 (0:00:04.344) 0:00:28.503 ***** 2025-11-08 13:32:55.697084 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-11-08 13:32:55.697099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-11-08 13:32:55.697110 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-11-08 13:32:55.697122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-11-08 13:32:55.697134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', 
'192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-11-08 13:32:55.697145 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-11-08 13:32:55.697157 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-11-08 13:32:55.697168 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-11-08 13:32:55.697179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-11-08 13:32:55.697197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-11-08 13:32:55.697209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-11-08 13:32:55.697246 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-11-08 13:32:55.697259 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-11-08 13:32:55.697277 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-11-08 13:32:55.697290 | orchestrator | 2025-11-08 13:32:55.697303 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-11-08 13:32:55.697332 | orchestrator | Saturday 08 November 2025 13:32:49 +0000 (0:00:05.177) 0:00:33.681 ***** 2025-11-08 13:32:55.697345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-11-08 13:32:55.697358 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': 
['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-11-08 13:32:55.697371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-11-08 13:32:55.697382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-11-08 13:32:55.697393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-11-08 13:32:55.697405 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-11-08 13:32:55.697416 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-11-08 13:32:55.697427 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-11-08 13:32:55.697438 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-11-08 13:32:55.697450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-11-08 13:32:55.697468 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-11-08 13:32:55.697480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-11-08 13:32:55.697503 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-11-08 13:33:01.412948 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-11-08 13:33:01.413054 | orchestrator | 2025-11-08 13:33:01.413071 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-11-08 13:33:01.413086 | orchestrator | Saturday 08 November 2025 13:32:55 +0000 (0:00:05.681) 0:00:39.362 ***** 2025-11-08 13:33:01.413098 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 13:33:01.413111 | orchestrator | 2025-11-08 13:33:01.413123 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-11-08 13:33:01.413134 | orchestrator | Saturday 08 November 2025 13:32:56 +0000 (0:00:01.062) 0:00:40.424 ***** 2025-11-08 13:33:01.413146 | orchestrator | ok: [testbed-manager] 2025-11-08 13:33:01.413158 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:33:01.413170 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:33:01.413181 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:33:01.413192 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:33:01.413203 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:33:01.413214 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:33:01.413225 | orchestrator | 2025-11-08 13:33:01.413236 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-11-08 13:33:01.413247 | orchestrator | Saturday 08 November 2025 13:32:57 +0000 (0:00:01.059) 0:00:41.484 ***** 2025-11-08 13:33:01.413258 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-11-08 13:33:01.413270 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-11-08 13:33:01.413281 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-11-08 13:33:01.413293 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-11-08 13:33:01.413304 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:33:01.413316 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-11-08 13:33:01.413327 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-11-08 13:33:01.413338 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-11-08 13:33:01.413349 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-11-08 13:33:01.413361 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:33:01.413372 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-11-08 13:33:01.413383 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-11-08 13:33:01.413394 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-11-08 13:33:01.413427 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-11-08 13:33:01.413439 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:33:01.413450 | orchestrator | skipping: 
[testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-11-08 13:33:01.413461 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-11-08 13:33:01.413474 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-11-08 13:33:01.413486 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-11-08 13:33:01.413499 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:33:01.413512 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-11-08 13:33:01.413525 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-11-08 13:33:01.413537 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-11-08 13:33:01.413550 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-11-08 13:33:01.413562 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-11-08 13:33:01.413575 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-11-08 13:33:01.413588 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-11-08 13:33:01.413600 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-11-08 13:33:01.413611 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:33:01.413622 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:33:01.413633 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-11-08 13:33:01.413644 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-11-08 13:33:01.413655 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-11-08 13:33:01.413666 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-11-08 13:33:01.413677 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:33:01.413688 | orchestrator | 2025-11-08 13:33:01.413699 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-11-08 13:33:01.413752 | orchestrator | Saturday 08 November 2025 13:32:59 +0000 (0:00:01.952) 0:00:43.437 ***** 2025-11-08 13:33:01.413764 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:33:01.413775 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:33:01.413786 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:33:01.413797 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:33:01.413814 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:33:01.413825 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:33:01.413836 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:33:01.413847 | orchestrator | 2025-11-08 13:33:01.413859 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-11-08 13:33:01.413870 | orchestrator | Saturday 08 November 2025 13:33:00 +0000 (0:00:00.607) 0:00:44.044 ***** 2025-11-08 13:33:01.413881 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:33:01.413892 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:33:01.413903 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:33:01.413914 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:33:01.413925 | orchestrator 
| skipping: [testbed-node-3] 2025-11-08 13:33:01.413935 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:33:01.413946 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:33:01.413957 | orchestrator | 2025-11-08 13:33:01.413968 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 13:33:01.413980 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-11-08 13:33:01.414002 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-08 13:33:01.414066 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-08 13:33:01.414081 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-08 13:33:01.414093 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-08 13:33:01.414104 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-08 13:33:01.414115 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-08 13:33:01.414126 | orchestrator | 2025-11-08 13:33:01.414137 | orchestrator | 2025-11-08 13:33:01.414149 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 13:33:01.414160 | orchestrator | Saturday 08 November 2025 13:33:01 +0000 (0:00:00.681) 0:00:44.726 ***** 2025-11-08 13:33:01.414171 | orchestrator | =============================================================================== 2025-11-08 13:33:01.414182 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.68s 2025-11-08 13:33:01.414193 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.18s 2025-11-08 13:33:01.414204 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.34s 2025-11-08 13:33:01.414215 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.08s 2025-11-08 13:33:01.414227 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.12s 2025-11-08 13:33:01.414238 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.95s 2025-11-08 13:33:01.414249 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.94s 2025-11-08 13:33:01.414260 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.69s 2025-11-08 13:33:01.414271 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.65s 2025-11-08 13:33:01.414282 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.62s 2025-11-08 13:33:01.414293 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.61s 2025-11-08 13:33:01.414304 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.20s 2025-11-08 13:33:01.414315 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.18s 2025-11-08 13:33:01.414326 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.15s 2025-11-08 13:33:01.414337 | orchestrator | osism.commons.network : Check if path for 
interface file exists --------- 1.11s 2025-11-08 13:33:01.414348 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.06s 2025-11-08 13:33:01.414359 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.06s 2025-11-08 13:33:01.414370 | orchestrator | osism.commons.network : Create required directories --------------------- 0.96s 2025-11-08 13:33:01.414381 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.94s 2025-11-08 13:33:01.414392 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.86s 2025-11-08 13:33:01.691392 | orchestrator | + osism apply wireguard 2025-11-08 13:33:13.742233 | orchestrator | 2025-11-08 13:33:13 | INFO  | Task 2f08f7e3-0bf5-4746-b295-0bdc61708b42 (wireguard) was prepared for execution. 2025-11-08 13:33:13.742347 | orchestrator | 2025-11-08 13:33:13 | INFO  | It takes a moment until task 2f08f7e3-0bf5-4746-b295-0bdc61708b42 (wireguard) has been started and output is visible here. 2025-11-08 13:33:34.945841 | orchestrator | 2025-11-08 13:33:34.945956 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-11-08 13:33:34.945975 | orchestrator | 2025-11-08 13:33:34.946004 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-11-08 13:33:34.946069 | orchestrator | Saturday 08 November 2025 13:33:17 +0000 (0:00:00.212) 0:00:00.212 ***** 2025-11-08 13:33:34.946085 | orchestrator | ok: [testbed-manager] 2025-11-08 13:33:34.946099 | orchestrator | 2025-11-08 13:33:34.946110 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-11-08 13:33:34.946121 | orchestrator | Saturday 08 November 2025 13:33:19 +0000 (0:00:01.462) 0:00:01.675 ***** 2025-11-08 13:33:34.946133 | orchestrator | changed: [testbed-manager] 2025-11-08 13:33:34.946144 | orchestrator | 2025-11-08 13:33:34.946155 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-11-08 13:33:34.946167 | orchestrator | Saturday 08 November 2025 13:33:27 +0000 (0:00:08.189) 0:00:09.865 ***** 2025-11-08 13:33:34.946178 | orchestrator | changed: [testbed-manager] 2025-11-08 13:33:34.946189 | orchestrator | 2025-11-08 13:33:34.946200 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-11-08 13:33:34.946211 | orchestrator | Saturday 08 November 2025 13:33:28 +0000 (0:00:00.535) 0:00:10.400 ***** 2025-11-08 13:33:34.946222 | orchestrator | changed: [testbed-manager] 2025-11-08 13:33:34.946233 | orchestrator | 2025-11-08 13:33:34.946244 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-11-08 13:33:34.946255 | orchestrator | Saturday 08 November 2025 13:33:28 +0000 (0:00:00.418) 0:00:10.818 ***** 2025-11-08 13:33:34.946266 | orchestrator | ok: [testbed-manager] 2025-11-08 13:33:34.946277 | orchestrator | 2025-11-08 13:33:34.946287 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-11-08 13:33:34.946298 | orchestrator | Saturday 08 November 2025 13:33:29 +0000 (0:00:00.650) 0:00:11.469 ***** 2025-11-08 13:33:34.946309 | orchestrator | ok: [testbed-manager] 2025-11-08 13:33:34.946320 | orchestrator | 2025-11-08 13:33:34.946331 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-11-08 
13:33:34.946344 | orchestrator | Saturday 08 November 2025 13:33:29 +0000 (0:00:00.405) 0:00:11.875 ***** 2025-11-08 13:33:34.946356 | orchestrator | ok: [testbed-manager] 2025-11-08 13:33:34.946368 | orchestrator | 2025-11-08 13:33:34.946381 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-11-08 13:33:34.946394 | orchestrator | Saturday 08 November 2025 13:33:29 +0000 (0:00:00.415) 0:00:12.291 ***** 2025-11-08 13:33:34.946406 | orchestrator | changed: [testbed-manager] 2025-11-08 13:33:34.946419 | orchestrator | 2025-11-08 13:33:34.946431 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-11-08 13:33:34.946443 | orchestrator | Saturday 08 November 2025 13:33:31 +0000 (0:00:01.166) 0:00:13.457 ***** 2025-11-08 13:33:34.946456 | orchestrator | changed: [testbed-manager] => (item=None) 2025-11-08 13:33:34.946469 | orchestrator | changed: [testbed-manager] 2025-11-08 13:33:34.946481 | orchestrator | 2025-11-08 13:33:34.946494 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-11-08 13:33:34.946506 | orchestrator | Saturday 08 November 2025 13:33:32 +0000 (0:00:00.875) 0:00:14.333 ***** 2025-11-08 13:33:34.946518 | orchestrator | changed: [testbed-manager] 2025-11-08 13:33:34.946531 | orchestrator | 2025-11-08 13:33:34.946543 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-11-08 13:33:34.946555 | orchestrator | Saturday 08 November 2025 13:33:33 +0000 (0:00:01.675) 0:00:16.008 ***** 2025-11-08 13:33:34.946567 | orchestrator | changed: [testbed-manager] 2025-11-08 13:33:34.946579 | orchestrator | 2025-11-08 13:33:34.946592 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 13:33:34.946620 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 13:33:34.946645 | orchestrator | 2025-11-08 13:33:34.946683 | orchestrator | 2025-11-08 13:33:34.946695 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 13:33:34.946738 | orchestrator | Saturday 08 November 2025 13:33:34 +0000 (0:00:00.944) 0:00:16.953 ***** 2025-11-08 13:33:34.946759 | orchestrator | =============================================================================== 2025-11-08 13:33:34.946777 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 8.19s 2025-11-08 13:33:34.946794 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.68s 2025-11-08 13:33:34.946806 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.46s 2025-11-08 13:33:34.946816 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.17s 2025-11-08 13:33:34.946827 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.94s 2025-11-08 13:33:34.946838 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.88s 2025-11-08 13:33:34.946849 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.65s 2025-11-08 13:33:34.946860 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.54s 2025-11-08 13:33:34.946871 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 
0.42s 2025-11-08 13:33:34.946882 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.42s 2025-11-08 13:33:34.946893 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.41s 2025-11-08 13:33:35.225121 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-11-08 13:33:35.260999 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-11-08 13:33:35.261042 | orchestrator | Dload Upload Total Spent Left Speed 2025-11-08 13:33:35.334970 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 202 0 --:--:-- --:--:-- --:--:-- 205 2025-11-08 13:33:35.350570 | orchestrator | + osism apply --environment custom workarounds 2025-11-08 13:33:37.245185 | orchestrator | 2025-11-08 13:33:37 | INFO  | Trying to run play workarounds in environment custom 2025-11-08 13:33:47.388261 | orchestrator | 2025-11-08 13:33:47 | INFO  | Task 40b38b02-51ce-41c0-aa10-c6efcc2c2e6c (workarounds) was prepared for execution. 2025-11-08 13:33:47.388348 | orchestrator | 2025-11-08 13:33:47 | INFO  | It takes a moment until task 40b38b02-51ce-41c0-aa10-c6efcc2c2e6c (workarounds) has been started and output is visible here. 2025-11-08 13:34:11.549936 | orchestrator | 2025-11-08 13:34:11.550127 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-08 13:34:11.550150 | orchestrator | 2025-11-08 13:34:11.550162 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-11-08 13:34:11.550175 | orchestrator | Saturday 08 November 2025 13:33:51 +0000 (0:00:00.127) 0:00:00.127 ***** 2025-11-08 13:34:11.550186 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-11-08 13:34:11.550198 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-11-08 13:34:11.550209 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-11-08 13:34:11.550220 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-11-08 13:34:11.550231 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-11-08 13:34:11.550241 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-11-08 13:34:11.550252 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-11-08 13:34:11.550263 | orchestrator | 2025-11-08 13:34:11.550274 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-11-08 13:34:11.550285 | orchestrator | 2025-11-08 13:34:11.550295 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-11-08 13:34:11.550306 | orchestrator | Saturday 08 November 2025 13:33:52 +0000 (0:00:00.763) 0:00:00.891 ***** 2025-11-08 13:34:11.550344 | orchestrator | ok: [testbed-manager] 2025-11-08 13:34:11.550357 | orchestrator | 2025-11-08 13:34:11.550367 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-11-08 13:34:11.550378 | orchestrator | 2025-11-08 13:34:11.550389 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-11-08 13:34:11.550401 | orchestrator | Saturday 08 November 2025 13:33:54 +0000 (0:00:02.374) 0:00:03.266 ***** 2025-11-08 13:34:11.550412 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:34:11.550423 | 
orchestrator | ok: [testbed-node-1] 2025-11-08 13:34:11.550434 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:34:11.550444 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:34:11.550455 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:34:11.550465 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:34:11.550476 | orchestrator | 2025-11-08 13:34:11.550486 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-11-08 13:34:11.550497 | orchestrator | 2025-11-08 13:34:11.550508 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-11-08 13:34:11.550519 | orchestrator | Saturday 08 November 2025 13:33:56 +0000 (0:00:01.786) 0:00:05.052 ***** 2025-11-08 13:34:11.550544 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-11-08 13:34:11.550557 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-11-08 13:34:11.550568 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-11-08 13:34:11.550579 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-11-08 13:34:11.550590 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-11-08 13:34:11.550601 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-11-08 13:34:11.550612 | orchestrator | 2025-11-08 13:34:11.550623 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-11-08 13:34:11.550634 | orchestrator | Saturday 08 November 2025 13:33:57 +0000 (0:00:01.463) 0:00:06.516 ***** 2025-11-08 13:34:11.550645 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:34:11.550657 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:34:11.550668 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:34:11.550678 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:34:11.550689 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:34:11.550719 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:34:11.550731 | orchestrator | 2025-11-08 13:34:11.550742 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-11-08 13:34:11.550753 | orchestrator | Saturday 08 November 2025 13:34:01 +0000 (0:00:03.677) 0:00:10.193 ***** 2025-11-08 13:34:11.550764 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:34:11.550775 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:34:11.550786 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:34:11.550797 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:34:11.550808 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:34:11.550818 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:34:11.550829 | orchestrator | 2025-11-08 13:34:11.550840 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-11-08 13:34:11.550851 | orchestrator | 2025-11-08 13:34:11.550862 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-11-08 13:34:11.550873 | orchestrator | Saturday 08 November 2025 13:34:02 +0000 (0:00:00.650) 0:00:10.844 ***** 2025-11-08 13:34:11.550884 | orchestrator | changed: 
[testbed-node-0] 2025-11-08 13:34:11.550895 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:34:11.550906 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:34:11.550917 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:34:11.550927 | orchestrator | changed: [testbed-manager] 2025-11-08 13:34:11.550947 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:34:11.550958 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:34:11.550968 | orchestrator | 2025-11-08 13:34:11.550979 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-11-08 13:34:11.550991 | orchestrator | Saturday 08 November 2025 13:34:03 +0000 (0:00:01.463) 0:00:12.308 ***** 2025-11-08 13:34:11.551007 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:34:11.551018 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:34:11.551029 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:34:11.551040 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:34:11.551051 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:34:11.551061 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:34:11.551090 | orchestrator | changed: [testbed-manager] 2025-11-08 13:34:11.551101 | orchestrator | 2025-11-08 13:34:11.551112 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-11-08 13:34:11.551124 | orchestrator | Saturday 08 November 2025 13:34:05 +0000 (0:00:01.496) 0:00:13.804 ***** 2025-11-08 13:34:11.551135 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:34:11.551145 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:34:11.551156 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:34:11.551167 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:34:11.551177 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:34:11.551188 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:34:11.551199 | orchestrator | ok: [testbed-manager] 2025-11-08 13:34:11.551209 | orchestrator | 2025-11-08 13:34:11.551220 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-11-08 13:34:11.551231 | orchestrator | Saturday 08 November 2025 13:34:06 +0000 (0:00:01.539) 0:00:15.343 ***** 2025-11-08 13:34:11.551242 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:34:11.551253 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:34:11.551263 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:34:11.551274 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:34:11.551285 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:34:11.551312 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:34:11.551324 | orchestrator | changed: [testbed-manager] 2025-11-08 13:34:11.551335 | orchestrator | 2025-11-08 13:34:11.551345 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-11-08 13:34:11.551356 | orchestrator | Saturday 08 November 2025 13:34:08 +0000 (0:00:01.761) 0:00:17.105 ***** 2025-11-08 13:34:11.551367 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:34:11.551377 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:34:11.551388 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:34:11.551399 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:34:11.551409 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:34:11.551420 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:34:11.551431 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:34:11.551442 | 
orchestrator | 2025-11-08 13:34:11.551452 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-11-08 13:34:11.551463 | orchestrator | 2025-11-08 13:34:11.551474 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-11-08 13:34:11.551485 | orchestrator | Saturday 08 November 2025 13:34:08 +0000 (0:00:00.604) 0:00:17.709 ***** 2025-11-08 13:34:11.551496 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:34:11.551507 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:34:11.551518 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:34:11.551528 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:34:11.551539 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:34:11.551549 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:34:11.551560 | orchestrator | ok: [testbed-manager] 2025-11-08 13:34:11.551570 | orchestrator | 2025-11-08 13:34:11.551581 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 13:34:11.551593 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-08 13:34:11.551613 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 13:34:11.551624 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 13:34:11.551635 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 13:34:11.551646 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 13:34:11.551657 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 13:34:11.551667 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 13:34:11.551678 | orchestrator | 2025-11-08 13:34:11.551689 | orchestrator | 2025-11-08 13:34:11.551735 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 13:34:11.551748 | orchestrator | Saturday 08 November 2025 13:34:11 +0000 (0:00:02.549) 0:00:20.258 ***** 2025-11-08 13:34:11.551759 | orchestrator | =============================================================================== 2025-11-08 13:34:11.551770 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.68s 2025-11-08 13:34:11.551781 | orchestrator | Install python3-docker -------------------------------------------------- 2.55s 2025-11-08 13:34:11.551791 | orchestrator | Apply netplan configuration --------------------------------------------- 2.37s 2025-11-08 13:34:11.551802 | orchestrator | Apply netplan configuration --------------------------------------------- 1.79s 2025-11-08 13:34:11.551813 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.76s 2025-11-08 13:34:11.551824 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.54s 2025-11-08 13:34:11.551834 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.50s 2025-11-08 13:34:11.551850 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.46s 2025-11-08 13:34:11.551861 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.46s 
2025-11-08 13:34:11.551872 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.76s 2025-11-08 13:34:11.551883 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.65s 2025-11-08 13:34:11.551901 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.60s 2025-11-08 13:34:12.148017 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-11-08 13:34:24.096196 | orchestrator | 2025-11-08 13:34:24 | INFO  | Task 6329506d-96f0-4220-83ec-a18a770dadd3 (reboot) was prepared for execution. 2025-11-08 13:34:24.096313 | orchestrator | 2025-11-08 13:34:24 | INFO  | It takes a moment until task 6329506d-96f0-4220-83ec-a18a770dadd3 (reboot) has been started and output is visible here. 2025-11-08 13:34:33.385548 | orchestrator | 2025-11-08 13:34:33.385692 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-11-08 13:34:33.385748 | orchestrator | 2025-11-08 13:34:33.385801 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-11-08 13:34:33.385816 | orchestrator | Saturday 08 November 2025 13:34:27 +0000 (0:00:00.178) 0:00:00.178 ***** 2025-11-08 13:34:33.385827 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:34:33.385839 | orchestrator | 2025-11-08 13:34:33.385851 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-11-08 13:34:33.385863 | orchestrator | Saturday 08 November 2025 13:34:27 +0000 (0:00:00.090) 0:00:00.269 ***** 2025-11-08 13:34:33.385874 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:34:33.385911 | orchestrator | 2025-11-08 13:34:33.385923 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-11-08 13:34:33.385935 | orchestrator | Saturday 08 November 2025 13:34:28 +0000 (0:00:00.855) 0:00:01.125 ***** 2025-11-08 13:34:33.385945 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:34:33.385956 | orchestrator | 2025-11-08 13:34:33.385968 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-11-08 13:34:33.385979 | orchestrator | 2025-11-08 13:34:33.385990 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-11-08 13:34:33.386001 | orchestrator | Saturday 08 November 2025 13:34:28 +0000 (0:00:00.090) 0:00:01.215 ***** 2025-11-08 13:34:33.386012 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:34:33.386070 | orchestrator | 2025-11-08 13:34:33.386083 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-11-08 13:34:33.386095 | orchestrator | Saturday 08 November 2025 13:34:28 +0000 (0:00:00.087) 0:00:01.303 ***** 2025-11-08 13:34:33.386107 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:34:33.386120 | orchestrator | 2025-11-08 13:34:33.386133 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-11-08 13:34:33.386145 | orchestrator | Saturday 08 November 2025 13:34:29 +0000 (0:00:00.647) 0:00:01.950 ***** 2025-11-08 13:34:33.386158 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:34:33.386170 | orchestrator | 2025-11-08 13:34:33.386182 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-11-08 13:34:33.386194 | orchestrator | 2025-11-08 13:34:33.386206 | orchestrator 
| TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-11-08 13:34:33.386219 | orchestrator | Saturday 08 November 2025 13:34:29 +0000 (0:00:00.104) 0:00:02.054 ***** 2025-11-08 13:34:33.386231 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:34:33.386243 | orchestrator | 2025-11-08 13:34:33.386255 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-11-08 13:34:33.386268 | orchestrator | Saturday 08 November 2025 13:34:29 +0000 (0:00:00.195) 0:00:02.250 ***** 2025-11-08 13:34:33.386280 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:34:33.386293 | orchestrator | 2025-11-08 13:34:33.386306 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-11-08 13:34:33.386318 | orchestrator | Saturday 08 November 2025 13:34:30 +0000 (0:00:00.668) 0:00:02.919 ***** 2025-11-08 13:34:33.386331 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:34:33.386344 | orchestrator | 2025-11-08 13:34:33.386357 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-11-08 13:34:33.386369 | orchestrator | 2025-11-08 13:34:33.386382 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-11-08 13:34:33.386394 | orchestrator | Saturday 08 November 2025 13:34:30 +0000 (0:00:00.098) 0:00:03.017 ***** 2025-11-08 13:34:33.386407 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:34:33.386419 | orchestrator | 2025-11-08 13:34:33.386432 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-11-08 13:34:33.386443 | orchestrator | Saturday 08 November 2025 13:34:30 +0000 (0:00:00.100) 0:00:03.118 ***** 2025-11-08 13:34:33.386454 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:34:33.386465 | orchestrator | 2025-11-08 13:34:33.386476 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-11-08 13:34:33.386487 | orchestrator | Saturday 08 November 2025 13:34:31 +0000 (0:00:00.645) 0:00:03.764 ***** 2025-11-08 13:34:33.386499 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:34:33.386510 | orchestrator | 2025-11-08 13:34:33.386521 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-11-08 13:34:33.386532 | orchestrator | 2025-11-08 13:34:33.386543 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-11-08 13:34:33.386554 | orchestrator | Saturday 08 November 2025 13:34:31 +0000 (0:00:00.101) 0:00:03.865 ***** 2025-11-08 13:34:33.386565 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:34:33.386585 | orchestrator | 2025-11-08 13:34:33.386596 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-11-08 13:34:33.386607 | orchestrator | Saturday 08 November 2025 13:34:31 +0000 (0:00:00.092) 0:00:03.958 ***** 2025-11-08 13:34:33.386618 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:34:33.386629 | orchestrator | 2025-11-08 13:34:33.386640 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-11-08 13:34:33.386664 | orchestrator | Saturday 08 November 2025 13:34:32 +0000 (0:00:00.642) 0:00:04.600 ***** 2025-11-08 13:34:33.386676 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:34:33.386687 | orchestrator | 2025-11-08 13:34:33.386720 | orchestrator | PLAY 
[Reboot systems] ********************************************************** 2025-11-08 13:34:33.386732 | orchestrator | 2025-11-08 13:34:33.386744 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-11-08 13:34:33.386754 | orchestrator | Saturday 08 November 2025 13:34:32 +0000 (0:00:00.131) 0:00:04.732 ***** 2025-11-08 13:34:33.386765 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:34:33.386777 | orchestrator | 2025-11-08 13:34:33.386788 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-11-08 13:34:33.386799 | orchestrator | Saturday 08 November 2025 13:34:32 +0000 (0:00:00.090) 0:00:04.822 ***** 2025-11-08 13:34:33.386810 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:34:33.386821 | orchestrator | 2025-11-08 13:34:33.386832 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-11-08 13:34:33.386843 | orchestrator | Saturday 08 November 2025 13:34:33 +0000 (0:00:00.638) 0:00:05.461 ***** 2025-11-08 13:34:33.386872 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:34:33.386884 | orchestrator | 2025-11-08 13:34:33.386895 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 13:34:33.386907 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 13:34:33.386920 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 13:34:33.386931 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 13:34:33.386942 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 13:34:33.386953 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 13:34:33.386964 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 13:34:33.386975 | orchestrator | 2025-11-08 13:34:33.386986 | orchestrator | 2025-11-08 13:34:33.386997 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 13:34:33.387008 | orchestrator | Saturday 08 November 2025 13:34:33 +0000 (0:00:00.033) 0:00:05.495 ***** 2025-11-08 13:34:33.387020 | orchestrator | =============================================================================== 2025-11-08 13:34:33.387031 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.10s 2025-11-08 13:34:33.387042 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.66s 2025-11-08 13:34:33.387053 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.56s 2025-11-08 13:34:33.576343 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-11-08 13:34:45.418148 | orchestrator | 2025-11-08 13:34:45 | INFO  | Task b75158fc-cfe6-43b4-8f75-3d6d3c69f460 (wait-for-connection) was prepared for execution. 2025-11-08 13:34:45.418284 | orchestrator | 2025-11-08 13:34:45 | INFO  | It takes a moment until task b75158fc-cfe6-43b4-8f75-3d6d3c69f460 (wait-for-connection) has been started and output is visible here. 
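The two plays around this point form a deliberate pair: osism apply reboot (run with ireallymeanit=yes) fires an asynchronous reboot on every testbed node without waiting for it to finish, and osism apply wait-for-connection then blocks until each node answers over SSH again. A rough shell sketch of the same reboot-then-wait pattern, assuming password-less SSH to the node names from the inventory above and an arbitrary 600-second timeout (both assumptions for illustration, not values read from the playbooks):

    # fire-and-forget reboot on each node ("do not wait for the reboot to complete")
    for node in testbed-node-0 testbed-node-1 testbed-node-2 testbed-node-3 testbed-node-4 testbed-node-5; do
        ssh "$node" 'sudo systemctl reboot' || true
    done
    # poll until every node accepts SSH connections again (what the wait-for-connection play does)
    for node in testbed-node-0 testbed-node-1 testbed-node-2 testbed-node-3 testbed-node-4 testbed-node-5; do
        timeout 600 sh -c "until ssh -o ConnectTimeout=5 '$node' true; do sleep 5; done"
    done

The PLAY RECAP above matches this split: each node reports one changed task (the reboot trigger), and reachability is verified afterwards by the separate wait-for-connection play whose output follows.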
2025-11-08 13:35:01.490218 | orchestrator | 2025-11-08 13:35:01.490345 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-11-08 13:35:01.490362 | orchestrator | 2025-11-08 13:35:01.490374 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-11-08 13:35:01.490386 | orchestrator | Saturday 08 November 2025 13:34:49 +0000 (0:00:00.224) 0:00:00.224 ***** 2025-11-08 13:35:01.490397 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:35:01.490410 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:35:01.490426 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:35:01.490445 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:35:01.490464 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:35:01.490482 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:35:01.490500 | orchestrator | 2025-11-08 13:35:01.490521 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 13:35:01.490542 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 13:35:01.490557 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 13:35:01.490569 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 13:35:01.490580 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 13:35:01.490591 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 13:35:01.490602 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 13:35:01.490613 | orchestrator | 2025-11-08 13:35:01.490625 | orchestrator | 2025-11-08 13:35:01.490688 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 13:35:01.490730 | orchestrator | Saturday 08 November 2025 13:35:01 +0000 (0:00:11.518) 0:00:11.742 ***** 2025-11-08 13:35:01.490743 | orchestrator | =============================================================================== 2025-11-08 13:35:01.490756 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.52s 2025-11-08 13:35:01.771658 | orchestrator | + osism apply hddtemp 2025-11-08 13:35:13.789771 | orchestrator | 2025-11-08 13:35:13 | INFO  | Task a1f1e066-2390-4876-b7c1-d73737f6b298 (hddtemp) was prepared for execution. 2025-11-08 13:35:13.789891 | orchestrator | 2025-11-08 13:35:13 | INFO  | It takes a moment until task a1f1e066-2390-4876-b7c1-d73737f6b298 (hddtemp) has been started and output is visible here. 
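The wait-for-connection play has a single task per host ("Wait until remote system is reachable") that blocks until Ansible can open a connection again; here all six nodes answer within roughly 12 seconds. The play most likely relies on Ansible's built-in connection wait, but a rough shell equivalent of the idea, with hypothetical HOST/TIMEOUT parameters, looks like this:

    #!/usr/bin/env bash
    # Sketch: poll a host over SSH until it accepts connections or a timeout expires.
    HOST="${1:?usage: $0 <host> [timeout_seconds]}"
    TIMEOUT="${2:-600}"
    deadline=$(( $(date +%s) + TIMEOUT ))

    until ssh -o BatchMode=yes -o ConnectTimeout=5 "$HOST" true 2>/dev/null; do
        if (( $(date +%s) >= deadline )); then
            echo "timed out waiting for $HOST" >&2
            exit 1
        fi
        sleep 5
    done
    echo "$HOST is reachable"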
2025-11-08 13:35:41.569315 | orchestrator | 2025-11-08 13:35:41.569414 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-11-08 13:35:41.569426 | orchestrator | 2025-11-08 13:35:41.569434 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-11-08 13:35:41.569442 | orchestrator | Saturday 08 November 2025 13:35:17 +0000 (0:00:00.248) 0:00:00.248 ***** 2025-11-08 13:35:41.569449 | orchestrator | ok: [testbed-manager] 2025-11-08 13:35:41.569458 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:35:41.569464 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:35:41.569471 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:35:41.569478 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:35:41.569485 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:35:41.569492 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:35:41.569499 | orchestrator | 2025-11-08 13:35:41.569505 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-11-08 13:35:41.569512 | orchestrator | Saturday 08 November 2025 13:35:18 +0000 (0:00:00.676) 0:00:00.924 ***** 2025-11-08 13:35:41.569533 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 13:35:41.569571 | orchestrator | 2025-11-08 13:35:41.569579 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-11-08 13:35:41.569585 | orchestrator | Saturday 08 November 2025 13:35:19 +0000 (0:00:01.144) 0:00:02.069 ***** 2025-11-08 13:35:41.569592 | orchestrator | ok: [testbed-manager] 2025-11-08 13:35:41.569599 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:35:41.569606 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:35:41.569612 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:35:41.569619 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:35:41.569625 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:35:41.569632 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:35:41.569639 | orchestrator | 2025-11-08 13:35:41.569646 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-11-08 13:35:41.569652 | orchestrator | Saturday 08 November 2025 13:35:21 +0000 (0:00:01.936) 0:00:04.006 ***** 2025-11-08 13:35:41.569659 | orchestrator | changed: [testbed-manager] 2025-11-08 13:35:41.569667 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:35:41.569673 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:35:41.569680 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:35:41.569687 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:35:41.569693 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:35:41.569733 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:35:41.569740 | orchestrator | 2025-11-08 13:35:41.569747 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-11-08 13:35:41.569754 | orchestrator | Saturday 08 November 2025 13:35:22 +0000 (0:00:01.160) 0:00:05.166 ***** 2025-11-08 13:35:41.569761 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:35:41.569768 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:35:41.569774 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:35:41.569781 | orchestrator | ok: [testbed-node-3] 2025-11-08 
13:35:41.569787 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:35:41.569795 | orchestrator | ok: [testbed-manager] 2025-11-08 13:35:41.569801 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:35:41.569808 | orchestrator | 2025-11-08 13:35:41.569815 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-11-08 13:35:41.569821 | orchestrator | Saturday 08 November 2025 13:35:23 +0000 (0:00:01.109) 0:00:06.276 ***** 2025-11-08 13:35:41.569828 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:35:41.569835 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:35:41.569841 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:35:41.569848 | orchestrator | changed: [testbed-manager] 2025-11-08 13:35:41.569855 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:35:41.569861 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:35:41.569868 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:35:41.569874 | orchestrator | 2025-11-08 13:35:41.569881 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-11-08 13:35:41.569888 | orchestrator | Saturday 08 November 2025 13:35:24 +0000 (0:00:00.789) 0:00:07.066 ***** 2025-11-08 13:35:41.569894 | orchestrator | changed: [testbed-manager] 2025-11-08 13:35:41.569901 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:35:41.569907 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:35:41.569914 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:35:41.569920 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:35:41.569927 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:35:41.569934 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:35:41.569940 | orchestrator | 2025-11-08 13:35:41.569947 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-11-08 13:35:41.569953 | orchestrator | Saturday 08 November 2025 13:35:38 +0000 (0:00:13.318) 0:00:20.384 ***** 2025-11-08 13:35:41.569960 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 13:35:41.569973 | orchestrator | 2025-11-08 13:35:41.569980 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-11-08 13:35:41.569999 | orchestrator | Saturday 08 November 2025 13:35:39 +0000 (0:00:01.237) 0:00:21.621 ***** 2025-11-08 13:35:41.570006 | orchestrator | changed: [testbed-manager] 2025-11-08 13:35:41.570013 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:35:41.570057 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:35:41.570064 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:35:41.570071 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:35:41.570078 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:35:41.570085 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:35:41.570091 | orchestrator | 2025-11-08 13:35:41.570098 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 13:35:41.570105 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 13:35:41.570126 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-08 13:35:41.570134 | 
orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-08 13:35:41.570141 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-08 13:35:41.570148 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-08 13:35:41.570155 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-08 13:35:41.570162 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-08 13:35:41.570168 | orchestrator | 2025-11-08 13:35:41.570175 | orchestrator | 2025-11-08 13:35:41.570182 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 13:35:41.570189 | orchestrator | Saturday 08 November 2025 13:35:41 +0000 (0:00:01.851) 0:00:23.473 ***** 2025-11-08 13:35:41.570195 | orchestrator | =============================================================================== 2025-11-08 13:35:41.570202 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.32s 2025-11-08 13:35:41.570209 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.94s 2025-11-08 13:35:41.570216 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.85s 2025-11-08 13:35:41.570222 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.24s 2025-11-08 13:35:41.570229 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.16s 2025-11-08 13:35:41.570236 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.14s 2025-11-08 13:35:41.570242 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.11s 2025-11-08 13:35:41.570249 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.79s 2025-11-08 13:35:41.570256 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.68s 2025-11-08 13:35:41.836511 | orchestrator | ++ semver latest 7.1.1 2025-11-08 13:35:41.895141 | orchestrator | + [[ -1 -ge 0 ]] 2025-11-08 13:35:41.895211 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-11-08 13:35:41.895225 | orchestrator | + sudo systemctl restart manager.service 2025-11-08 13:36:07.150306 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-11-08 13:36:07.150441 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-11-08 13:36:07.150456 | orchestrator | + local max_attempts=60 2025-11-08 13:36:07.150501 | orchestrator | + local name=ceph-ansible 2025-11-08 13:36:07.150512 | orchestrator | + local attempt_num=1 2025-11-08 13:36:07.150523 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-08 13:36:07.184111 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-11-08 13:36:07.184177 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-08 13:36:07.184190 | orchestrator | + sleep 5 2025-11-08 13:36:12.190673 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-08 13:36:12.223845 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-11-08 13:36:12.223904 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-08 13:36:12.223911 | orchestrator | + sleep 5 2025-11-08 
13:36:17.228040 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-08 13:36:17.256952 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-11-08 13:36:17.257010 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-08 13:36:17.257023 | orchestrator | + sleep 5 2025-11-08 13:36:22.260337 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-08 13:36:22.293189 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-11-08 13:36:22.293283 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-08 13:36:22.293298 | orchestrator | + sleep 5 2025-11-08 13:36:27.296017 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-08 13:36:27.328665 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-11-08 13:36:27.328733 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-08 13:36:27.328752 | orchestrator | + sleep 5 2025-11-08 13:36:32.330865 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-08 13:36:32.371905 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-11-08 13:36:32.371978 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-08 13:36:32.371993 | orchestrator | + sleep 5 2025-11-08 13:36:37.377161 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-08 13:36:37.415938 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-11-08 13:36:37.415994 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-08 13:36:37.416008 | orchestrator | + sleep 5 2025-11-08 13:36:42.420212 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-08 13:36:42.460218 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-11-08 13:36:42.460288 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-08 13:36:42.460301 | orchestrator | + sleep 5 2025-11-08 13:36:47.462833 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-08 13:36:47.499045 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-11-08 13:36:47.499144 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-08 13:36:47.499156 | orchestrator | + sleep 5 2025-11-08 13:36:52.502304 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-08 13:36:52.538670 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-11-08 13:36:52.538812 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-08 13:36:52.538830 | orchestrator | + sleep 5 2025-11-08 13:36:57.544225 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-08 13:36:57.579312 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-11-08 13:36:57.579397 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-08 13:36:57.579411 | orchestrator | + sleep 5 2025-11-08 13:37:02.584885 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-08 13:37:02.627204 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-11-08 13:37:02.627300 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-08 13:37:02.627314 | orchestrator | + sleep 5 2025-11-08 13:37:07.632174 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-08 13:37:07.674676 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-11-08 13:37:07.674822 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2025-11-08 13:37:07.674835 | orchestrator | + sleep 5 2025-11-08 13:37:12.679250 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-08 13:37:12.714373 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-11-08 13:37:12.714426 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-11-08 13:37:12.714439 | orchestrator | + local max_attempts=60 2025-11-08 13:37:12.714452 | orchestrator | + local name=kolla-ansible 2025-11-08 13:37:12.714464 | orchestrator | + local attempt_num=1 2025-11-08 13:37:12.715909 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-11-08 13:37:12.749845 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-11-08 13:37:12.749871 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-11-08 13:37:12.749883 | orchestrator | + local max_attempts=60 2025-11-08 13:37:12.749894 | orchestrator | + local name=osism-ansible 2025-11-08 13:37:12.749905 | orchestrator | + local attempt_num=1 2025-11-08 13:37:12.750532 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-11-08 13:37:12.778891 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-11-08 13:37:12.778923 | orchestrator | + [[ true == \t\r\u\e ]] 2025-11-08 13:37:12.778935 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-11-08 13:37:12.941623 | orchestrator | ARA in ceph-ansible already disabled. 2025-11-08 13:37:13.108243 | orchestrator | ARA in kolla-ansible already disabled. 2025-11-08 13:37:13.267547 | orchestrator | ARA in osism-ansible already disabled. 2025-11-08 13:37:13.428543 | orchestrator | ARA in osism-kubernetes already disabled. 2025-11-08 13:37:13.429009 | orchestrator | + osism apply gather-facts 2025-11-08 13:37:32.296352 | orchestrator | 2025-11-08 13:37:32 | INFO  | Task d2e47c4e-e722-4c4e-884d-958ae5b25aaf (gather-facts) was prepared for execution. 2025-11-08 13:37:32.296475 | orchestrator | 2025-11-08 13:37:32 | INFO  | It takes a moment until task d2e47c4e-e722-4c4e-884d-958ae5b25aaf (gather-facts) has been started and output is visible here. 
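After the manager service restart, the deploy script polls the ceph-ansible, kolla-ansible and osism-ansible containers until Docker reports them as healthy; ceph-ansible goes from unhealthy through starting to healthy in a little over a minute, the other two are healthy on the first check. The set -x trace above exposes the structure of wait_for_container_healthy; reconstructed from that trace it looks roughly like the following (the failure message is an assumption, everything else mirrors the trace):

    # Reconstructed from the shell trace above, not copied from the repository.
    wait_for_container_healthy() {
        local max_attempts=$1
        local name=$2
        local attempt_num=1

        until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
            if (( attempt_num++ == max_attempts )); then
                echo "container $name did not become healthy in time" >&2   # assumed wording
                return 1
            fi
            sleep 5
        done
    }

    wait_for_container_healthy 60 ceph-ansible
    wait_for_container_healthy 60 kolla-ansible
    wait_for_container_healthy 60 osism-ansible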
2025-11-08 13:37:45.344670 | orchestrator | 2025-11-08 13:37:45.344785 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-11-08 13:37:45.344794 | orchestrator | 2025-11-08 13:37:45.344800 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-11-08 13:37:45.344806 | orchestrator | Saturday 08 November 2025 13:37:36 +0000 (0:00:00.192) 0:00:00.192 ***** 2025-11-08 13:37:45.344812 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:37:45.344819 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:37:45.344825 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:37:45.344830 | orchestrator | ok: [testbed-manager] 2025-11-08 13:37:45.344835 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:37:45.344840 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:37:45.344846 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:37:45.344851 | orchestrator | 2025-11-08 13:37:45.344856 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-11-08 13:37:45.344862 | orchestrator | 2025-11-08 13:37:45.344867 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-11-08 13:37:45.344872 | orchestrator | Saturday 08 November 2025 13:37:44 +0000 (0:00:08.689) 0:00:08.881 ***** 2025-11-08 13:37:45.344877 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:37:45.344884 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:37:45.344889 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:37:45.344894 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:37:45.344900 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:37:45.344905 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:37:45.344910 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:37:45.344915 | orchestrator | 2025-11-08 13:37:45.344921 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 13:37:45.344926 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-08 13:37:45.344933 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-08 13:37:45.344938 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-08 13:37:45.344944 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-08 13:37:45.344949 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-08 13:37:45.344976 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-08 13:37:45.344982 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-08 13:37:45.344987 | orchestrator | 2025-11-08 13:37:45.344992 | orchestrator | 2025-11-08 13:37:45.344997 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 13:37:45.345002 | orchestrator | Saturday 08 November 2025 13:37:45 +0000 (0:00:00.386) 0:00:09.267 ***** 2025-11-08 13:37:45.345020 | orchestrator | =============================================================================== 2025-11-08 13:37:45.345025 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.69s 2025-11-08 
13:37:45.345031 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.39s 2025-11-08 13:37:45.576935 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-11-08 13:37:45.594079 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-11-08 13:37:45.613276 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-11-08 13:37:45.627888 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-11-08 13:37:45.639625 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-11-08 13:37:45.650371 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-11-08 13:37:45.660631 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-11-08 13:37:45.671524 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-11-08 13:37:45.681975 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-11-08 13:37:45.693478 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-11-08 13:37:45.704890 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-11-08 13:37:45.728637 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-11-08 13:37:45.739048 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-11-08 13:37:45.749130 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-11-08 13:37:45.759270 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-11-08 13:37:45.773306 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-11-08 13:37:45.784362 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-11-08 13:37:45.796898 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-11-08 13:37:45.812203 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-11-08 13:37:45.824933 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-11-08 13:37:45.835960 | orchestrator | + [[ false == \t\r\u\e ]] 2025-11-08 13:37:45.948186 | orchestrator | ok: Runtime: 0:23:48.215078 2025-11-08 13:37:46.036665 | 2025-11-08 13:37:46.036939 | TASK [Deploy services] 2025-11-08 13:37:46.574285 | orchestrator | skipping: Conditional result was False 2025-11-08 13:37:46.591741 | 2025-11-08 13:37:46.591906 | TASK [Deploy in a nutshell] 2025-11-08 13:37:47.316353 | orchestrator | + set -e 
2025-11-08 13:37:47.317980 | orchestrator | 2025-11-08 13:37:47.318003 | orchestrator | # PULL IMAGES 2025-11-08 13:37:47.318011 | orchestrator | 2025-11-08 13:37:47.318045 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-11-08 13:37:47.318060 | orchestrator | ++ export INTERACTIVE=false 2025-11-08 13:37:47.318070 | orchestrator | ++ INTERACTIVE=false 2025-11-08 13:37:47.318101 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-11-08 13:37:47.318116 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-11-08 13:37:47.318125 | orchestrator | + source /opt/manager-vars.sh 2025-11-08 13:37:47.318132 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-11-08 13:37:47.318144 | orchestrator | ++ NUMBER_OF_NODES=6 2025-11-08 13:37:47.318150 | orchestrator | ++ export CEPH_VERSION=reef 2025-11-08 13:37:47.318161 | orchestrator | ++ CEPH_VERSION=reef 2025-11-08 13:37:47.318167 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-11-08 13:37:47.318179 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-11-08 13:37:47.318185 | orchestrator | ++ export MANAGER_VERSION=latest 2025-11-08 13:37:47.318194 | orchestrator | ++ MANAGER_VERSION=latest 2025-11-08 13:37:47.318200 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-11-08 13:37:47.318207 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-11-08 13:37:47.318214 | orchestrator | ++ export ARA=false 2025-11-08 13:37:47.318220 | orchestrator | ++ ARA=false 2025-11-08 13:37:47.318226 | orchestrator | ++ export DEPLOY_MODE=manager 2025-11-08 13:37:47.318232 | orchestrator | ++ DEPLOY_MODE=manager 2025-11-08 13:37:47.318239 | orchestrator | ++ export TEMPEST=false 2025-11-08 13:37:47.318245 | orchestrator | ++ TEMPEST=false 2025-11-08 13:37:47.318251 | orchestrator | ++ export IS_ZUUL=true 2025-11-08 13:37:47.318257 | orchestrator | ++ IS_ZUUL=true 2025-11-08 13:37:47.318263 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.186 2025-11-08 13:37:47.318270 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.186 2025-11-08 13:37:47.318276 | orchestrator | ++ export EXTERNAL_API=false 2025-11-08 13:37:47.318282 | orchestrator | ++ EXTERNAL_API=false 2025-11-08 13:37:47.318288 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-11-08 13:37:47.318294 | orchestrator | ++ IMAGE_USER=ubuntu 2025-11-08 13:37:47.318301 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-11-08 13:37:47.318307 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-11-08 13:37:47.318313 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-11-08 13:37:47.318319 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-11-08 13:37:47.318325 | orchestrator | + echo 2025-11-08 13:37:47.319577 | orchestrator | + echo '# PULL IMAGES' 2025-11-08 13:37:47.319604 | orchestrator | + echo 2025-11-08 13:37:47.319621 | orchestrator | ++ semver latest 7.0.0 2025-11-08 13:37:47.378913 | orchestrator | + [[ -1 -ge 0 ]] 2025-11-08 13:37:47.378981 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-11-08 13:37:47.378995 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2025-11-08 13:37:49.202632 | orchestrator | 2025-11-08 13:37:49 | INFO  | Trying to run play pull-images in environment custom 2025-11-08 13:37:59.442910 | orchestrator | 2025-11-08 13:37:59 | INFO  | Task f0c9834c-01cc-497e-a47c-ec39b070ba24 (pull-images) was prepared for execution. 2025-11-08 13:37:59.443034 | orchestrator | 2025-11-08 13:37:59 | INFO  | Task f0c9834c-01cc-497e-a47c-ec39b070ba24 is running in background. No more output. Check ARA for logs. 
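Before kicking off the image pull, the script compares MANAGER_VERSION against 7.0.0 with semver; since the version is "latest" the numeric comparison yields -1 and the string check takes over, so the newer call style is used and the pull runs detached (--no-wait, output only visible in ARA) with two retries. A sketch of that guard, assuming semver prints -1/0/1 like a comparison function; only the commands that appear in the trace are taken from the log, the else branch is hypothetical:

    # Sketch of the version gate seen in the trace above.
    MANAGER_VERSION="latest"

    if [[ "$(semver "$MANAGER_VERSION" 7.0.0)" -ge 0 ]] || [[ "$MANAGER_VERSION" == "latest" ]]; then
        # manager >= 7.0.0 or "latest": run the pull in the background with 2 retries
        osism apply --no-wait -r 2 -e custom pull-images
    else
        # hypothetical fallback for older managers, not shown in this log
        osism apply -e custom pull-images
    fi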
2025-11-08 13:38:01.733809 | orchestrator | 2025-11-08 13:38:01 | INFO  | Trying to run play wipe-partitions in environment custom 2025-11-08 13:38:11.814901 | orchestrator | 2025-11-08 13:38:11 | INFO  | Task 6bfc44c3-0398-4903-8e45-6dc5576a824c (wipe-partitions) was prepared for execution. 2025-11-08 13:38:11.815036 | orchestrator | 2025-11-08 13:38:11 | INFO  | It takes a moment until task 6bfc44c3-0398-4903-8e45-6dc5576a824c (wipe-partitions) has been started and output is visible here. 2025-11-08 13:38:24.454101 | orchestrator | 2025-11-08 13:38:24.454222 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-11-08 13:38:24.454240 | orchestrator | 2025-11-08 13:38:24.454253 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-11-08 13:38:24.454270 | orchestrator | Saturday 08 November 2025 13:38:16 +0000 (0:00:00.125) 0:00:00.125 ***** 2025-11-08 13:38:24.454281 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:38:24.454294 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:38:24.454306 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:38:24.454317 | orchestrator | 2025-11-08 13:38:24.454329 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-11-08 13:38:24.454366 | orchestrator | Saturday 08 November 2025 13:38:16 +0000 (0:00:00.588) 0:00:00.713 ***** 2025-11-08 13:38:24.454378 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:38:24.454389 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:38:24.454405 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:38:24.454416 | orchestrator | 2025-11-08 13:38:24.454427 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-11-08 13:38:24.454438 | orchestrator | Saturday 08 November 2025 13:38:17 +0000 (0:00:00.356) 0:00:01.070 ***** 2025-11-08 13:38:24.454449 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:38:24.454461 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:38:24.454472 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:38:24.454483 | orchestrator | 2025-11-08 13:38:24.454494 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-11-08 13:38:24.454504 | orchestrator | Saturday 08 November 2025 13:38:17 +0000 (0:00:00.553) 0:00:01.624 ***** 2025-11-08 13:38:24.454522 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:38:24.454541 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:38:24.454560 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:38:24.454579 | orchestrator | 2025-11-08 13:38:24.454597 | orchestrator | TASK [Check device availability] *********************************************** 2025-11-08 13:38:24.454617 | orchestrator | Saturday 08 November 2025 13:38:18 +0000 (0:00:00.246) 0:00:01.870 ***** 2025-11-08 13:38:24.454637 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-11-08 13:38:24.454662 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-11-08 13:38:24.454683 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-11-08 13:38:24.454734 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-11-08 13:38:24.454753 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-11-08 13:38:24.454771 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-11-08 13:38:24.454791 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 
2025-11-08 13:38:24.454810 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-11-08 13:38:24.454831 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-11-08 13:38:24.454845 | orchestrator | 2025-11-08 13:38:24.454857 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-11-08 13:38:24.454871 | orchestrator | Saturday 08 November 2025 13:38:19 +0000 (0:00:01.213) 0:00:03.084 ***** 2025-11-08 13:38:24.454883 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-11-08 13:38:24.454896 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-11-08 13:38:24.454907 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-11-08 13:38:24.454918 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-11-08 13:38:24.454928 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-11-08 13:38:24.454939 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-11-08 13:38:24.454950 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-11-08 13:38:24.454961 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-11-08 13:38:24.454971 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-11-08 13:38:24.454982 | orchestrator | 2025-11-08 13:38:24.454993 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-11-08 13:38:24.455005 | orchestrator | Saturday 08 November 2025 13:38:20 +0000 (0:00:01.507) 0:00:04.591 ***** 2025-11-08 13:38:24.455024 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-11-08 13:38:24.455041 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-11-08 13:38:24.455057 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-11-08 13:38:24.455075 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-11-08 13:38:24.455092 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-11-08 13:38:24.455111 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-11-08 13:38:24.455130 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-11-08 13:38:24.455163 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-11-08 13:38:24.455183 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-11-08 13:38:24.455194 | orchestrator | 2025-11-08 13:38:24.455205 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-11-08 13:38:24.455215 | orchestrator | Saturday 08 November 2025 13:38:22 +0000 (0:00:02.129) 0:00:06.721 ***** 2025-11-08 13:38:24.455226 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:38:24.455237 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:38:24.455247 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:38:24.455258 | orchestrator | 2025-11-08 13:38:24.455268 | orchestrator | TASK [Request device events from the kernel] *********************************** 2025-11-08 13:38:24.455279 | orchestrator | Saturday 08 November 2025 13:38:23 +0000 (0:00:00.576) 0:00:07.298 ***** 2025-11-08 13:38:24.455290 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:38:24.455301 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:38:24.455311 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:38:24.455322 | orchestrator | 2025-11-08 13:38:24.455333 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 13:38:24.455347 | orchestrator | testbed-node-3 : ok=7  changed=5  
unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 13:38:24.455360 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 13:38:24.455395 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 13:38:24.455406 | orchestrator | 2025-11-08 13:38:24.455417 | orchestrator | 2025-11-08 13:38:24.455428 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 13:38:24.455439 | orchestrator | Saturday 08 November 2025 13:38:24 +0000 (0:00:00.635) 0:00:07.933 ***** 2025-11-08 13:38:24.455449 | orchestrator | =============================================================================== 2025-11-08 13:38:24.455460 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.13s 2025-11-08 13:38:24.455471 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.51s 2025-11-08 13:38:24.455482 | orchestrator | Check device availability ----------------------------------------------- 1.21s 2025-11-08 13:38:24.455492 | orchestrator | Request device events from the kernel ----------------------------------- 0.64s 2025-11-08 13:38:24.455503 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.59s 2025-11-08 13:38:24.455514 | orchestrator | Reload udev rules ------------------------------------------------------- 0.58s 2025-11-08 13:38:24.455525 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.55s 2025-11-08 13:38:24.455535 | orchestrator | Remove all rook related logical devices --------------------------------- 0.36s 2025-11-08 13:38:24.455546 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.25s 2025-11-08 13:38:36.667831 | orchestrator | 2025-11-08 13:38:36 | INFO  | Task f4d1e8c8-9c31-4a58-b0cc-da3d4ad52924 (facts) was prepared for execution. 2025-11-08 13:38:36.667962 | orchestrator | 2025-11-08 13:38:36 | INFO  | It takes a moment until task f4d1e8c8-9c31-4a58-b0cc-da3d4ad52924 (facts) has been started and output is visible here. 
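The wipe-partitions play prepares the extra disks on the three storage nodes (testbed-node-3/4/5): it looks for leftover logical volumes owned by UID 167 (the ceph user inside the Ceph containers), wipes the filesystem and partition signatures on /dev/sdb through /dev/sdd, zeroes the first 32 MiB of each disk, and then reloads udev so the kernel re-reads the now-empty devices. Per device, that is roughly equivalent to the following commands; this is a sketch of what the tasks appear to do, not the playbook itself:

    # Sketch: manual equivalent of the wipe-partitions tasks for one device.
    DEV=/dev/sdb                               # the play repeats this for /dev/sdc and /dev/sdd

    wipefs --all "$DEV"                        # "Wipe partitions with wipefs"
    dd if=/dev/zero of="$DEV" bs=1M count=32   # "Overwrite first 32M with zeros"
    udevadm control --reload-rules             # "Reload udev rules"
    udevadm trigger                            # "Request device events from the kernel"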
2025-11-08 13:38:49.429215 | orchestrator | 2025-11-08 13:38:49.429353 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-11-08 13:38:49.429370 | orchestrator | 2025-11-08 13:38:49.429382 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-11-08 13:38:49.429394 | orchestrator | Saturday 08 November 2025 13:38:40 +0000 (0:00:00.254) 0:00:00.254 ***** 2025-11-08 13:38:49.429406 | orchestrator | ok: [testbed-manager] 2025-11-08 13:38:49.429418 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:38:49.429429 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:38:49.429472 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:38:49.429484 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:38:49.429495 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:38:49.429505 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:38:49.429516 | orchestrator | 2025-11-08 13:38:49.429527 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-11-08 13:38:49.429538 | orchestrator | Saturday 08 November 2025 13:38:41 +0000 (0:00:01.195) 0:00:01.450 ***** 2025-11-08 13:38:49.429549 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:38:49.429561 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:38:49.429572 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:38:49.429582 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:38:49.429593 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:38:49.429604 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:38:49.429614 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:38:49.429625 | orchestrator | 2025-11-08 13:38:49.429636 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-11-08 13:38:49.429647 | orchestrator | 2025-11-08 13:38:49.429679 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-11-08 13:38:49.429736 | orchestrator | Saturday 08 November 2025 13:38:43 +0000 (0:00:01.202) 0:00:02.652 ***** 2025-11-08 13:38:49.429749 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:38:49.429761 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:38:49.429774 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:38:49.429786 | orchestrator | ok: [testbed-manager] 2025-11-08 13:38:49.429797 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:38:49.429809 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:38:49.429821 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:38:49.429833 | orchestrator | 2025-11-08 13:38:49.429845 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-11-08 13:38:49.429857 | orchestrator | 2025-11-08 13:38:49.429870 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-11-08 13:38:49.429882 | orchestrator | Saturday 08 November 2025 13:38:48 +0000 (0:00:05.370) 0:00:08.023 ***** 2025-11-08 13:38:49.429894 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:38:49.429907 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:38:49.429919 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:38:49.429931 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:38:49.429943 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:38:49.429954 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:38:49.429966 | orchestrator | skipping: 
[testbed-node-5] 2025-11-08 13:38:49.429978 | orchestrator | 2025-11-08 13:38:49.429991 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 13:38:49.430004 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 13:38:49.430076 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 13:38:49.430091 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 13:38:49.430104 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 13:38:49.430116 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 13:38:49.430127 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 13:38:49.430138 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 13:38:49.430149 | orchestrator | 2025-11-08 13:38:49.430170 | orchestrator | 2025-11-08 13:38:49.430181 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 13:38:49.430191 | orchestrator | Saturday 08 November 2025 13:38:49 +0000 (0:00:00.506) 0:00:08.529 ***** 2025-11-08 13:38:49.430202 | orchestrator | =============================================================================== 2025-11-08 13:38:49.430213 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.37s 2025-11-08 13:38:49.430224 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.20s 2025-11-08 13:38:49.430235 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.20s 2025-11-08 13:38:49.430246 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.51s 2025-11-08 13:38:51.677558 | orchestrator | 2025-11-08 13:38:51 | INFO  | Task b2f7714f-7129-4cee-a848-3bdb8400b667 (ceph-configure-lvm-volumes) was prepared for execution. 2025-11-08 13:38:51.677654 | orchestrator | 2025-11-08 13:38:51 | INFO  | It takes a moment until task b2f7714f-7129-4cee-a848-3bdb8400b667 (ceph-configure-lvm-volumes) has been started and output is visible here. 
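The facts play first applies osism.commons.facts, which here only ensures the custom facts directory exists (no fact files are configured, so the copy task is skipped), and then re-gathers facts for all hosts so that later plays see the current state of the freshly wiped nodes. Custom facts in Ansible are scripts or ini/JSON files under /etc/ansible/facts.d; a minimal hypothetical example of such a fact file, with values borrowed from the manager-vars.sh dump earlier in this log, would be:

    #!/usr/bin/env bash
    # Hypothetical /etc/ansible/facts.d/testbed.fact: an executable custom fact that
    # shows up under ansible_local.testbed after the next fact-gathering run.
    echo '{"deploy_mode": "manager", "ceph_stack": "ceph-ansible"}'

Executable .fact files must print JSON; static .fact files in ini or JSON form work as well.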
2025-11-08 13:39:03.006086 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2025-11-08 13:39:03.006159 | orchestrator | 2.16.14 2025-11-08 13:39:03.006167 | orchestrator | 2025-11-08 13:39:03.006172 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-11-08 13:39:03.006178 | orchestrator | 2025-11-08 13:39:03.006182 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-11-08 13:39:03.006187 | orchestrator | Saturday 08 November 2025 13:38:55 +0000 (0:00:00.324) 0:00:00.324 ***** 2025-11-08 13:39:03.006192 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-11-08 13:39:03.006197 | orchestrator | 2025-11-08 13:39:03.006201 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-11-08 13:39:03.006205 | orchestrator | Saturday 08 November 2025 13:38:56 +0000 (0:00:00.249) 0:00:00.573 ***** 2025-11-08 13:39:03.006209 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:39:03.006214 | orchestrator | 2025-11-08 13:39:03.006218 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:39:03.006222 | orchestrator | Saturday 08 November 2025 13:38:56 +0000 (0:00:00.223) 0:00:00.797 ***** 2025-11-08 13:39:03.006227 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-11-08 13:39:03.006242 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-11-08 13:39:03.006246 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-11-08 13:39:03.006250 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-11-08 13:39:03.006254 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-11-08 13:39:03.006258 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-11-08 13:39:03.006262 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-11-08 13:39:03.006266 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-11-08 13:39:03.006270 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-11-08 13:39:03.006274 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-11-08 13:39:03.006278 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-11-08 13:39:03.006282 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-11-08 13:39:03.006286 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-11-08 13:39:03.006289 | orchestrator | 2025-11-08 13:39:03.006293 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:39:03.006312 | orchestrator | Saturday 08 November 2025 13:38:56 +0000 (0:00:00.448) 0:00:01.246 ***** 2025-11-08 13:39:03.006316 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:39:03.006320 | orchestrator | 2025-11-08 13:39:03.006324 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:39:03.006328 | orchestrator 
| Saturday 08 November 2025 13:38:57 +0000 (0:00:00.200) 0:00:01.446 ***** 2025-11-08 13:39:03.006332 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:39:03.006335 | orchestrator | 2025-11-08 13:39:03.006339 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:39:03.006343 | orchestrator | Saturday 08 November 2025 13:38:57 +0000 (0:00:00.200) 0:00:01.646 ***** 2025-11-08 13:39:03.006347 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:39:03.006351 | orchestrator | 2025-11-08 13:39:03.006354 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:39:03.006361 | orchestrator | Saturday 08 November 2025 13:38:57 +0000 (0:00:00.201) 0:00:01.847 ***** 2025-11-08 13:39:03.006365 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:39:03.006369 | orchestrator | 2025-11-08 13:39:03.006373 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:39:03.006376 | orchestrator | Saturday 08 November 2025 13:38:57 +0000 (0:00:00.220) 0:00:02.068 ***** 2025-11-08 13:39:03.006380 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:39:03.006384 | orchestrator | 2025-11-08 13:39:03.006388 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:39:03.006392 | orchestrator | Saturday 08 November 2025 13:38:57 +0000 (0:00:00.195) 0:00:02.263 ***** 2025-11-08 13:39:03.006395 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:39:03.006399 | orchestrator | 2025-11-08 13:39:03.006403 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:39:03.006407 | orchestrator | Saturday 08 November 2025 13:38:58 +0000 (0:00:00.197) 0:00:02.461 ***** 2025-11-08 13:39:03.006410 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:39:03.006414 | orchestrator | 2025-11-08 13:39:03.006418 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:39:03.006422 | orchestrator | Saturday 08 November 2025 13:38:58 +0000 (0:00:00.216) 0:00:02.678 ***** 2025-11-08 13:39:03.006425 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:39:03.006429 | orchestrator | 2025-11-08 13:39:03.006433 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:39:03.006437 | orchestrator | Saturday 08 November 2025 13:38:58 +0000 (0:00:00.197) 0:00:02.876 ***** 2025-11-08 13:39:03.006441 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7bcb89ad-f0c3-4ca7-8180-786cf7e929b8) 2025-11-08 13:39:03.006446 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7bcb89ad-f0c3-4ca7-8180-786cf7e929b8) 2025-11-08 13:39:03.006450 | orchestrator | 2025-11-08 13:39:03.006454 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:39:03.006467 | orchestrator | Saturday 08 November 2025 13:38:58 +0000 (0:00:00.382) 0:00:03.258 ***** 2025-11-08 13:39:03.006471 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ce3e3473-55e8-454e-8a0a-ac291b184d20) 2025-11-08 13:39:03.006477 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ce3e3473-55e8-454e-8a0a-ac291b184d20) 2025-11-08 13:39:03.006481 | orchestrator | 2025-11-08 13:39:03.006485 | orchestrator | TASK [Add known links to the list of available block 
devices] ****************** 2025-11-08 13:39:03.006489 | orchestrator | Saturday 08 November 2025 13:38:59 +0000 (0:00:00.623) 0:00:03.882 ***** 2025-11-08 13:39:03.006493 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3757d830-b0af-49e2-85a4-9877085f3a2f) 2025-11-08 13:39:03.006497 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3757d830-b0af-49e2-85a4-9877085f3a2f) 2025-11-08 13:39:03.006500 | orchestrator | 2025-11-08 13:39:03.006504 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:39:03.006512 | orchestrator | Saturday 08 November 2025 13:39:00 +0000 (0:00:00.611) 0:00:04.493 ***** 2025-11-08 13:39:03.006516 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e000e6ad-d7f7-4db6-bbc8-734d25f4dc3b) 2025-11-08 13:39:03.006520 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e000e6ad-d7f7-4db6-bbc8-734d25f4dc3b) 2025-11-08 13:39:03.006523 | orchestrator | 2025-11-08 13:39:03.006527 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:39:03.006531 | orchestrator | Saturday 08 November 2025 13:39:00 +0000 (0:00:00.800) 0:00:05.294 ***** 2025-11-08 13:39:03.006535 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-11-08 13:39:03.006539 | orchestrator | 2025-11-08 13:39:03.006542 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:39:03.006546 | orchestrator | Saturday 08 November 2025 13:39:01 +0000 (0:00:00.339) 0:00:05.633 ***** 2025-11-08 13:39:03.006550 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-11-08 13:39:03.006554 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-11-08 13:39:03.006557 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-11-08 13:39:03.006561 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-11-08 13:39:03.006565 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-11-08 13:39:03.006569 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-11-08 13:39:03.006573 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-11-08 13:39:03.006576 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-11-08 13:39:03.006580 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-11-08 13:39:03.006584 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-11-08 13:39:03.006588 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-11-08 13:39:03.006592 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-11-08 13:39:03.006595 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-11-08 13:39:03.006599 | orchestrator | 2025-11-08 13:39:03.006603 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:39:03.006607 | orchestrator 
| Saturday 08 November 2025 13:39:01 +0000 (0:00:00.372) 0:00:06.006 ***** 2025-11-08 13:39:03.006610 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:39:03.006614 | orchestrator | 2025-11-08 13:39:03.006618 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:39:03.006622 | orchestrator | Saturday 08 November 2025 13:39:01 +0000 (0:00:00.199) 0:00:06.205 ***** 2025-11-08 13:39:03.006625 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:39:03.006629 | orchestrator | 2025-11-08 13:39:03.006633 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:39:03.006637 | orchestrator | Saturday 08 November 2025 13:39:02 +0000 (0:00:00.215) 0:00:06.421 ***** 2025-11-08 13:39:03.006641 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:39:03.006644 | orchestrator | 2025-11-08 13:39:03.006648 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:39:03.006652 | orchestrator | Saturday 08 November 2025 13:39:02 +0000 (0:00:00.204) 0:00:06.626 ***** 2025-11-08 13:39:03.006656 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:39:03.006659 | orchestrator | 2025-11-08 13:39:03.006663 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:39:03.006667 | orchestrator | Saturday 08 November 2025 13:39:02 +0000 (0:00:00.198) 0:00:06.825 ***** 2025-11-08 13:39:03.006674 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:39:03.006678 | orchestrator | 2025-11-08 13:39:03.006682 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:39:03.006699 | orchestrator | Saturday 08 November 2025 13:39:02 +0000 (0:00:00.195) 0:00:07.021 ***** 2025-11-08 13:39:03.006703 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:39:03.006707 | orchestrator | 2025-11-08 13:39:03.006710 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:39:03.006714 | orchestrator | Saturday 08 November 2025 13:39:02 +0000 (0:00:00.200) 0:00:07.221 ***** 2025-11-08 13:39:03.006718 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:39:03.006722 | orchestrator | 2025-11-08 13:39:03.006728 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:39:10.462240 | orchestrator | Saturday 08 November 2025 13:39:02 +0000 (0:00:00.193) 0:00:07.415 ***** 2025-11-08 13:39:10.462359 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:39:10.462375 | orchestrator | 2025-11-08 13:39:10.462388 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:39:10.462400 | orchestrator | Saturday 08 November 2025 13:39:03 +0000 (0:00:00.207) 0:00:07.622 ***** 2025-11-08 13:39:10.462411 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-11-08 13:39:10.462443 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-11-08 13:39:10.462455 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-11-08 13:39:10.462466 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-11-08 13:39:10.462477 | orchestrator | 2025-11-08 13:39:10.462488 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:39:10.462499 | orchestrator | Saturday 08 November 2025 13:39:04 +0000 (0:00:00.981) 0:00:08.604 ***** 2025-11-08 
13:39:10.462510 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:39:10.462521 | orchestrator | 2025-11-08 13:39:10.462531 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:39:10.462542 | orchestrator | Saturday 08 November 2025 13:39:04 +0000 (0:00:00.209) 0:00:08.814 ***** 2025-11-08 13:39:10.462553 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:39:10.462564 | orchestrator | 2025-11-08 13:39:10.462574 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:39:10.462585 | orchestrator | Saturday 08 November 2025 13:39:04 +0000 (0:00:00.201) 0:00:09.016 ***** 2025-11-08 13:39:10.462595 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:39:10.462606 | orchestrator | 2025-11-08 13:39:10.462617 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:39:10.462627 | orchestrator | Saturday 08 November 2025 13:39:04 +0000 (0:00:00.198) 0:00:09.214 ***** 2025-11-08 13:39:10.462638 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:39:10.462649 | orchestrator | 2025-11-08 13:39:10.462659 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-11-08 13:39:10.462670 | orchestrator | Saturday 08 November 2025 13:39:05 +0000 (0:00:00.206) 0:00:09.420 ***** 2025-11-08 13:39:10.462681 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-11-08 13:39:10.462722 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-11-08 13:39:10.462733 | orchestrator | 2025-11-08 13:39:10.462744 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-11-08 13:39:10.462755 | orchestrator | Saturday 08 November 2025 13:39:05 +0000 (0:00:00.178) 0:00:09.599 ***** 2025-11-08 13:39:10.462765 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:39:10.462776 | orchestrator | 2025-11-08 13:39:10.462787 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-11-08 13:39:10.462797 | orchestrator | Saturday 08 November 2025 13:39:05 +0000 (0:00:00.141) 0:00:09.740 ***** 2025-11-08 13:39:10.462808 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:39:10.462818 | orchestrator | 2025-11-08 13:39:10.462829 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-11-08 13:39:10.462866 | orchestrator | Saturday 08 November 2025 13:39:05 +0000 (0:00:00.142) 0:00:09.883 ***** 2025-11-08 13:39:10.462877 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:39:10.462888 | orchestrator | 2025-11-08 13:39:10.462898 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-11-08 13:39:10.462909 | orchestrator | Saturday 08 November 2025 13:39:05 +0000 (0:00:00.135) 0:00:10.018 ***** 2025-11-08 13:39:10.462920 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:39:10.462931 | orchestrator | 2025-11-08 13:39:10.462941 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-11-08 13:39:10.462952 | orchestrator | Saturday 08 November 2025 13:39:05 +0000 (0:00:00.137) 0:00:10.156 ***** 2025-11-08 13:39:10.462963 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'cd56445f-4803-5564-bbe6-d923870c576d'}}) 2025-11-08 13:39:10.462974 | orchestrator | ok: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c507e483-80d4-5110-a9ba-f918053b344b'}}) 2025-11-08 13:39:10.462985 | orchestrator | 2025-11-08 13:39:10.462996 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-11-08 13:39:10.463007 | orchestrator | Saturday 08 November 2025 13:39:05 +0000 (0:00:00.166) 0:00:10.322 ***** 2025-11-08 13:39:10.463019 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'cd56445f-4803-5564-bbe6-d923870c576d'}})  2025-11-08 13:39:10.463038 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c507e483-80d4-5110-a9ba-f918053b344b'}})  2025-11-08 13:39:10.463049 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:39:10.463060 | orchestrator | 2025-11-08 13:39:10.463070 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-11-08 13:39:10.463081 | orchestrator | Saturday 08 November 2025 13:39:06 +0000 (0:00:00.145) 0:00:10.467 ***** 2025-11-08 13:39:10.463091 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'cd56445f-4803-5564-bbe6-d923870c576d'}})  2025-11-08 13:39:10.463103 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c507e483-80d4-5110-a9ba-f918053b344b'}})  2025-11-08 13:39:10.463113 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:39:10.463124 | orchestrator | 2025-11-08 13:39:10.463134 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-11-08 13:39:10.463145 | orchestrator | Saturday 08 November 2025 13:39:06 +0000 (0:00:00.339) 0:00:10.807 ***** 2025-11-08 13:39:10.463156 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'cd56445f-4803-5564-bbe6-d923870c576d'}})  2025-11-08 13:39:10.463187 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c507e483-80d4-5110-a9ba-f918053b344b'}})  2025-11-08 13:39:10.463198 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:39:10.463209 | orchestrator | 2025-11-08 13:39:10.463220 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-11-08 13:39:10.463230 | orchestrator | Saturday 08 November 2025 13:39:06 +0000 (0:00:00.154) 0:00:10.961 ***** 2025-11-08 13:39:10.463241 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:39:10.463252 | orchestrator | 2025-11-08 13:39:10.463262 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-11-08 13:39:10.463273 | orchestrator | Saturday 08 November 2025 13:39:06 +0000 (0:00:00.145) 0:00:11.107 ***** 2025-11-08 13:39:10.463283 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:39:10.463294 | orchestrator | 2025-11-08 13:39:10.463305 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-11-08 13:39:10.463315 | orchestrator | Saturday 08 November 2025 13:39:06 +0000 (0:00:00.150) 0:00:11.258 ***** 2025-11-08 13:39:10.463326 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:39:10.463336 | orchestrator | 2025-11-08 13:39:10.463347 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-11-08 13:39:10.463358 | orchestrator | Saturday 08 November 2025 13:39:06 +0000 (0:00:00.137) 0:00:11.395 ***** 2025-11-08 13:39:10.463377 | orchestrator | 
skipping: [testbed-node-3] 2025-11-08 13:39:10.463388 | orchestrator | 2025-11-08 13:39:10.463399 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-11-08 13:39:10.463409 | orchestrator | Saturday 08 November 2025 13:39:07 +0000 (0:00:00.134) 0:00:11.529 ***** 2025-11-08 13:39:10.463420 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:39:10.463431 | orchestrator | 2025-11-08 13:39:10.463441 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-11-08 13:39:10.463452 | orchestrator | Saturday 08 November 2025 13:39:07 +0000 (0:00:00.151) 0:00:11.681 ***** 2025-11-08 13:39:10.463463 | orchestrator | ok: [testbed-node-3] => { 2025-11-08 13:39:10.463473 | orchestrator |  "ceph_osd_devices": { 2025-11-08 13:39:10.463485 | orchestrator |  "sdb": { 2025-11-08 13:39:10.463496 | orchestrator |  "osd_lvm_uuid": "cd56445f-4803-5564-bbe6-d923870c576d" 2025-11-08 13:39:10.463507 | orchestrator |  }, 2025-11-08 13:39:10.463517 | orchestrator |  "sdc": { 2025-11-08 13:39:10.463528 | orchestrator |  "osd_lvm_uuid": "c507e483-80d4-5110-a9ba-f918053b344b" 2025-11-08 13:39:10.463539 | orchestrator |  } 2025-11-08 13:39:10.463550 | orchestrator |  } 2025-11-08 13:39:10.463561 | orchestrator | } 2025-11-08 13:39:10.463571 | orchestrator | 2025-11-08 13:39:10.463582 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-11-08 13:39:10.463599 | orchestrator | Saturday 08 November 2025 13:39:07 +0000 (0:00:00.134) 0:00:11.815 ***** 2025-11-08 13:39:10.463610 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:39:10.463621 | orchestrator | 2025-11-08 13:39:10.463631 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-11-08 13:39:10.463642 | orchestrator | Saturday 08 November 2025 13:39:07 +0000 (0:00:00.129) 0:00:11.945 ***** 2025-11-08 13:39:10.463652 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:39:10.463663 | orchestrator | 2025-11-08 13:39:10.463673 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-11-08 13:39:10.463699 | orchestrator | Saturday 08 November 2025 13:39:07 +0000 (0:00:00.128) 0:00:12.073 ***** 2025-11-08 13:39:10.463711 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:39:10.463722 | orchestrator | 2025-11-08 13:39:10.463732 | orchestrator | TASK [Print configuration data] ************************************************ 2025-11-08 13:39:10.463743 | orchestrator | Saturday 08 November 2025 13:39:07 +0000 (0:00:00.135) 0:00:12.209 ***** 2025-11-08 13:39:10.463753 | orchestrator | changed: [testbed-node-3] => { 2025-11-08 13:39:10.463764 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-11-08 13:39:10.463775 | orchestrator |  "ceph_osd_devices": { 2025-11-08 13:39:10.463786 | orchestrator |  "sdb": { 2025-11-08 13:39:10.463796 | orchestrator |  "osd_lvm_uuid": "cd56445f-4803-5564-bbe6-d923870c576d" 2025-11-08 13:39:10.463807 | orchestrator |  }, 2025-11-08 13:39:10.463818 | orchestrator |  "sdc": { 2025-11-08 13:39:10.463828 | orchestrator |  "osd_lvm_uuid": "c507e483-80d4-5110-a9ba-f918053b344b" 2025-11-08 13:39:10.463839 | orchestrator |  } 2025-11-08 13:39:10.463850 | orchestrator |  }, 2025-11-08 13:39:10.463860 | orchestrator |  "lvm_volumes": [ 2025-11-08 13:39:10.463871 | orchestrator |  { 2025-11-08 13:39:10.463881 | orchestrator |  "data": "osd-block-cd56445f-4803-5564-bbe6-d923870c576d", 
2025-11-08 13:39:10.463892 | orchestrator |  "data_vg": "ceph-cd56445f-4803-5564-bbe6-d923870c576d" 2025-11-08 13:39:10.463903 | orchestrator |  }, 2025-11-08 13:39:10.463913 | orchestrator |  { 2025-11-08 13:39:10.463924 | orchestrator |  "data": "osd-block-c507e483-80d4-5110-a9ba-f918053b344b", 2025-11-08 13:39:10.463934 | orchestrator |  "data_vg": "ceph-c507e483-80d4-5110-a9ba-f918053b344b" 2025-11-08 13:39:10.463945 | orchestrator |  } 2025-11-08 13:39:10.463956 | orchestrator |  ] 2025-11-08 13:39:10.463966 | orchestrator |  } 2025-11-08 13:39:10.463985 | orchestrator | } 2025-11-08 13:39:10.463995 | orchestrator | 2025-11-08 13:39:10.464006 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-11-08 13:39:10.464017 | orchestrator | Saturday 08 November 2025 13:39:08 +0000 (0:00:00.381) 0:00:12.590 ***** 2025-11-08 13:39:10.464027 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-11-08 13:39:10.464038 | orchestrator | 2025-11-08 13:39:10.464048 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-11-08 13:39:10.464059 | orchestrator | 2025-11-08 13:39:10.464069 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-11-08 13:39:10.464080 | orchestrator | Saturday 08 November 2025 13:39:09 +0000 (0:00:01.770) 0:00:14.361 ***** 2025-11-08 13:39:10.464090 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-11-08 13:39:10.464101 | orchestrator | 2025-11-08 13:39:10.464111 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-11-08 13:39:10.464122 | orchestrator | Saturday 08 November 2025 13:39:10 +0000 (0:00:00.285) 0:00:14.647 ***** 2025-11-08 13:39:10.464133 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:39:10.464143 | orchestrator | 2025-11-08 13:39:10.464161 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:39:17.480048 | orchestrator | Saturday 08 November 2025 13:39:10 +0000 (0:00:00.222) 0:00:14.869 ***** 2025-11-08 13:39:17.480168 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-11-08 13:39:17.480192 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-11-08 13:39:17.480213 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-11-08 13:39:17.480232 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-11-08 13:39:17.480251 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-11-08 13:39:17.480271 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-11-08 13:39:17.480289 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-11-08 13:39:17.480327 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-11-08 13:39:17.480339 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-11-08 13:39:17.480350 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-11-08 13:39:17.480360 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-11-08 
13:39:17.480377 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-11-08 13:39:17.480388 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-11-08 13:39:17.480399 | orchestrator | 2025-11-08 13:39:17.480412 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:39:17.480422 | orchestrator | Saturday 08 November 2025 13:39:10 +0000 (0:00:00.397) 0:00:15.266 ***** 2025-11-08 13:39:17.480433 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:39:17.480445 | orchestrator | 2025-11-08 13:39:17.480456 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:39:17.480467 | orchestrator | Saturday 08 November 2025 13:39:11 +0000 (0:00:00.206) 0:00:15.473 ***** 2025-11-08 13:39:17.480477 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:39:17.480488 | orchestrator | 2025-11-08 13:39:17.480499 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:39:17.480509 | orchestrator | Saturday 08 November 2025 13:39:11 +0000 (0:00:00.265) 0:00:15.739 ***** 2025-11-08 13:39:17.480520 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:39:17.480531 | orchestrator | 2025-11-08 13:39:17.480542 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:39:17.480576 | orchestrator | Saturday 08 November 2025 13:39:11 +0000 (0:00:00.178) 0:00:15.918 ***** 2025-11-08 13:39:17.480590 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:39:17.480602 | orchestrator | 2025-11-08 13:39:17.480615 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:39:17.480627 | orchestrator | Saturday 08 November 2025 13:39:11 +0000 (0:00:00.213) 0:00:16.131 ***** 2025-11-08 13:39:17.480639 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:39:17.480651 | orchestrator | 2025-11-08 13:39:17.480664 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:39:17.480676 | orchestrator | Saturday 08 November 2025 13:39:12 +0000 (0:00:00.576) 0:00:16.708 ***** 2025-11-08 13:39:17.480730 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:39:17.480742 | orchestrator | 2025-11-08 13:39:17.480754 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:39:17.480767 | orchestrator | Saturday 08 November 2025 13:39:12 +0000 (0:00:00.208) 0:00:16.917 ***** 2025-11-08 13:39:17.480779 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:39:17.480791 | orchestrator | 2025-11-08 13:39:17.480803 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:39:17.480815 | orchestrator | Saturday 08 November 2025 13:39:12 +0000 (0:00:00.191) 0:00:17.109 ***** 2025-11-08 13:39:17.480827 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:39:17.480840 | orchestrator | 2025-11-08 13:39:17.480852 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:39:17.480864 | orchestrator | Saturday 08 November 2025 13:39:12 +0000 (0:00:00.182) 0:00:17.291 ***** 2025-11-08 13:39:17.480876 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d1fc42fd-5332-49a1-9701-fd67e0fd5d8d) 2025-11-08 13:39:17.480889 | 
orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d1fc42fd-5332-49a1-9701-fd67e0fd5d8d) 2025-11-08 13:39:17.480902 | orchestrator | 2025-11-08 13:39:17.480914 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:39:17.480925 | orchestrator | Saturday 08 November 2025 13:39:13 +0000 (0:00:00.405) 0:00:17.696 ***** 2025-11-08 13:39:17.480936 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_92c2e246-dc93-49f1-98da-a6574bccf4cb) 2025-11-08 13:39:17.480947 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_92c2e246-dc93-49f1-98da-a6574bccf4cb) 2025-11-08 13:39:17.480957 | orchestrator | 2025-11-08 13:39:17.480968 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:39:17.480979 | orchestrator | Saturday 08 November 2025 13:39:13 +0000 (0:00:00.449) 0:00:18.145 ***** 2025-11-08 13:39:17.480989 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_dc29408d-4f3e-478d-82da-c226aaca029c) 2025-11-08 13:39:17.481000 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_dc29408d-4f3e-478d-82da-c226aaca029c) 2025-11-08 13:39:17.481011 | orchestrator | 2025-11-08 13:39:17.481022 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:39:17.481051 | orchestrator | Saturday 08 November 2025 13:39:14 +0000 (0:00:00.370) 0:00:18.516 ***** 2025-11-08 13:39:17.481062 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a45a4cf7-d855-4857-b9ae-b573b3c7176d) 2025-11-08 13:39:17.481073 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a45a4cf7-d855-4857-b9ae-b573b3c7176d) 2025-11-08 13:39:17.481084 | orchestrator | 2025-11-08 13:39:17.481101 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:39:17.481113 | orchestrator | Saturday 08 November 2025 13:39:14 +0000 (0:00:00.355) 0:00:18.871 ***** 2025-11-08 13:39:17.481124 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-11-08 13:39:17.481134 | orchestrator | 2025-11-08 13:39:17.481145 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:39:17.481156 | orchestrator | Saturday 08 November 2025 13:39:14 +0000 (0:00:00.245) 0:00:19.117 ***** 2025-11-08 13:39:17.481176 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-11-08 13:39:17.481187 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-11-08 13:39:17.481198 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-11-08 13:39:17.481209 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-11-08 13:39:17.481219 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-11-08 13:39:17.481230 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-11-08 13:39:17.481241 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-11-08 13:39:17.481252 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-11-08 13:39:17.481262 | orchestrator | 
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-11-08 13:39:17.481273 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-11-08 13:39:17.481284 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-11-08 13:39:17.481295 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-11-08 13:39:17.481305 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-11-08 13:39:17.481316 | orchestrator | 2025-11-08 13:39:17.481327 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:39:17.481338 | orchestrator | Saturday 08 November 2025 13:39:14 +0000 (0:00:00.280) 0:00:19.397 ***** 2025-11-08 13:39:17.481349 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:39:17.481360 | orchestrator | 2025-11-08 13:39:17.481370 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:39:17.481381 | orchestrator | Saturday 08 November 2025 13:39:15 +0000 (0:00:00.453) 0:00:19.851 ***** 2025-11-08 13:39:17.481392 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:39:17.481403 | orchestrator | 2025-11-08 13:39:17.481414 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:39:17.481424 | orchestrator | Saturday 08 November 2025 13:39:15 +0000 (0:00:00.169) 0:00:20.020 ***** 2025-11-08 13:39:17.481435 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:39:17.481446 | orchestrator | 2025-11-08 13:39:17.481457 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:39:17.481468 | orchestrator | Saturday 08 November 2025 13:39:15 +0000 (0:00:00.169) 0:00:20.190 ***** 2025-11-08 13:39:17.481479 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:39:17.481490 | orchestrator | 2025-11-08 13:39:17.481500 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:39:17.481511 | orchestrator | Saturday 08 November 2025 13:39:15 +0000 (0:00:00.162) 0:00:20.353 ***** 2025-11-08 13:39:17.481522 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:39:17.481550 | orchestrator | 2025-11-08 13:39:17.481561 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:39:17.481584 | orchestrator | Saturday 08 November 2025 13:39:16 +0000 (0:00:00.164) 0:00:20.517 ***** 2025-11-08 13:39:17.481595 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:39:17.481606 | orchestrator | 2025-11-08 13:39:17.481616 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:39:17.481627 | orchestrator | Saturday 08 November 2025 13:39:16 +0000 (0:00:00.166) 0:00:20.684 ***** 2025-11-08 13:39:17.481638 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:39:17.481648 | orchestrator | 2025-11-08 13:39:17.481659 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:39:17.481670 | orchestrator | Saturday 08 November 2025 13:39:16 +0000 (0:00:00.160) 0:00:20.845 ***** 2025-11-08 13:39:17.481703 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:39:17.481715 | orchestrator | 2025-11-08 13:39:17.481726 | orchestrator | TASK [Add 
known partitions to the list of available block devices] ************* 2025-11-08 13:39:17.481736 | orchestrator | Saturday 08 November 2025 13:39:16 +0000 (0:00:00.170) 0:00:21.015 ***** 2025-11-08 13:39:17.481747 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-11-08 13:39:17.481759 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-11-08 13:39:17.481770 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-11-08 13:39:17.481781 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-11-08 13:39:17.481791 | orchestrator | 2025-11-08 13:39:17.481802 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:39:17.481813 | orchestrator | Saturday 08 November 2025 13:39:17 +0000 (0:00:00.705) 0:00:21.720 ***** 2025-11-08 13:39:17.481824 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:39:23.166992 | orchestrator | 2025-11-08 13:39:23.167081 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:39:23.167096 | orchestrator | Saturday 08 November 2025 13:39:17 +0000 (0:00:00.172) 0:00:21.893 ***** 2025-11-08 13:39:23.167108 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:39:23.167120 | orchestrator | 2025-11-08 13:39:23.167131 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:39:23.167165 | orchestrator | Saturday 08 November 2025 13:39:17 +0000 (0:00:00.184) 0:00:22.077 ***** 2025-11-08 13:39:23.167177 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:39:23.167188 | orchestrator | 2025-11-08 13:39:23.167199 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:39:23.167210 | orchestrator | Saturday 08 November 2025 13:39:17 +0000 (0:00:00.180) 0:00:22.257 ***** 2025-11-08 13:39:23.167221 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:39:23.167231 | orchestrator | 2025-11-08 13:39:23.167242 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-11-08 13:39:23.167253 | orchestrator | Saturday 08 November 2025 13:39:18 +0000 (0:00:00.540) 0:00:22.798 ***** 2025-11-08 13:39:23.167264 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-11-08 13:39:23.167275 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-11-08 13:39:23.167286 | orchestrator | 2025-11-08 13:39:23.167296 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-11-08 13:39:23.167307 | orchestrator | Saturday 08 November 2025 13:39:18 +0000 (0:00:00.151) 0:00:22.950 ***** 2025-11-08 13:39:23.167317 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:39:23.167328 | orchestrator | 2025-11-08 13:39:23.167339 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-11-08 13:39:23.167349 | orchestrator | Saturday 08 November 2025 13:39:18 +0000 (0:00:00.106) 0:00:23.056 ***** 2025-11-08 13:39:23.167360 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:39:23.167370 | orchestrator | 2025-11-08 13:39:23.167381 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-11-08 13:39:23.167392 | orchestrator | Saturday 08 November 2025 13:39:18 +0000 (0:00:00.106) 0:00:23.163 ***** 2025-11-08 13:39:23.167402 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:39:23.167413 | orchestrator | 
2025-11-08 13:39:23.167423 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-11-08 13:39:23.167434 | orchestrator | Saturday 08 November 2025 13:39:18 +0000 (0:00:00.111) 0:00:23.274 ***** 2025-11-08 13:39:23.167445 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:39:23.167456 | orchestrator | 2025-11-08 13:39:23.167467 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-11-08 13:39:23.167478 | orchestrator | Saturday 08 November 2025 13:39:18 +0000 (0:00:00.111) 0:00:23.386 ***** 2025-11-08 13:39:23.167489 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f393addc-5b9a-54bf-a4a6-7d44f9449202'}}) 2025-11-08 13:39:23.167500 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '380ddcdc-ed2e-5f5e-8a3f-001787d903df'}}) 2025-11-08 13:39:23.167537 | orchestrator | 2025-11-08 13:39:23.167550 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-11-08 13:39:23.167562 | orchestrator | Saturday 08 November 2025 13:39:19 +0000 (0:00:00.114) 0:00:23.500 ***** 2025-11-08 13:39:23.167575 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f393addc-5b9a-54bf-a4a6-7d44f9449202'}})  2025-11-08 13:39:23.167589 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '380ddcdc-ed2e-5f5e-8a3f-001787d903df'}})  2025-11-08 13:39:23.167602 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:39:23.167614 | orchestrator | 2025-11-08 13:39:23.167626 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-11-08 13:39:23.167639 | orchestrator | Saturday 08 November 2025 13:39:19 +0000 (0:00:00.115) 0:00:23.616 ***** 2025-11-08 13:39:23.167651 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f393addc-5b9a-54bf-a4a6-7d44f9449202'}})  2025-11-08 13:39:23.167663 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '380ddcdc-ed2e-5f5e-8a3f-001787d903df'}})  2025-11-08 13:39:23.167674 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:39:23.167716 | orchestrator | 2025-11-08 13:39:23.167736 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-11-08 13:39:23.167754 | orchestrator | Saturday 08 November 2025 13:39:19 +0000 (0:00:00.127) 0:00:23.743 ***** 2025-11-08 13:39:23.167774 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f393addc-5b9a-54bf-a4a6-7d44f9449202'}})  2025-11-08 13:39:23.167792 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '380ddcdc-ed2e-5f5e-8a3f-001787d903df'}})  2025-11-08 13:39:23.167808 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:39:23.167818 | orchestrator | 2025-11-08 13:39:23.167829 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-11-08 13:39:23.167840 | orchestrator | Saturday 08 November 2025 13:39:19 +0000 (0:00:00.136) 0:00:23.880 ***** 2025-11-08 13:39:23.167851 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:39:23.167861 | orchestrator | 2025-11-08 13:39:23.167872 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-11-08 13:39:23.167882 | orchestrator | Saturday 08 November 2025 
13:39:19 +0000 (0:00:00.141) 0:00:24.022 ***** 2025-11-08 13:39:23.167893 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:39:23.167904 | orchestrator | 2025-11-08 13:39:23.167914 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-11-08 13:39:23.167925 | orchestrator | Saturday 08 November 2025 13:39:19 +0000 (0:00:00.146) 0:00:24.168 ***** 2025-11-08 13:39:23.167953 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:39:23.167965 | orchestrator | 2025-11-08 13:39:23.167976 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-11-08 13:39:23.167986 | orchestrator | Saturday 08 November 2025 13:39:20 +0000 (0:00:00.284) 0:00:24.453 ***** 2025-11-08 13:39:23.167997 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:39:23.168008 | orchestrator | 2025-11-08 13:39:23.168018 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-11-08 13:39:23.168029 | orchestrator | Saturday 08 November 2025 13:39:20 +0000 (0:00:00.122) 0:00:24.575 ***** 2025-11-08 13:39:23.168040 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:39:23.168050 | orchestrator | 2025-11-08 13:39:23.168061 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-11-08 13:39:23.168072 | orchestrator | Saturday 08 November 2025 13:39:20 +0000 (0:00:00.125) 0:00:24.701 ***** 2025-11-08 13:39:23.168082 | orchestrator | ok: [testbed-node-4] => { 2025-11-08 13:39:23.168094 | orchestrator |  "ceph_osd_devices": { 2025-11-08 13:39:23.168105 | orchestrator |  "sdb": { 2025-11-08 13:39:23.168116 | orchestrator |  "osd_lvm_uuid": "f393addc-5b9a-54bf-a4a6-7d44f9449202" 2025-11-08 13:39:23.168136 | orchestrator |  }, 2025-11-08 13:39:23.168148 | orchestrator |  "sdc": { 2025-11-08 13:39:23.168165 | orchestrator |  "osd_lvm_uuid": "380ddcdc-ed2e-5f5e-8a3f-001787d903df" 2025-11-08 13:39:23.168177 | orchestrator |  } 2025-11-08 13:39:23.168188 | orchestrator |  } 2025-11-08 13:39:23.168199 | orchestrator | } 2025-11-08 13:39:23.168210 | orchestrator | 2025-11-08 13:39:23.168221 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-11-08 13:39:23.168232 | orchestrator | Saturday 08 November 2025 13:39:20 +0000 (0:00:00.135) 0:00:24.837 ***** 2025-11-08 13:39:23.168243 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:39:23.168253 | orchestrator | 2025-11-08 13:39:23.168264 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-11-08 13:39:23.168275 | orchestrator | Saturday 08 November 2025 13:39:20 +0000 (0:00:00.127) 0:00:24.964 ***** 2025-11-08 13:39:23.168286 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:39:23.168296 | orchestrator | 2025-11-08 13:39:23.168307 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-11-08 13:39:23.168318 | orchestrator | Saturday 08 November 2025 13:39:20 +0000 (0:00:00.128) 0:00:25.093 ***** 2025-11-08 13:39:23.168328 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:39:23.168339 | orchestrator | 2025-11-08 13:39:23.168350 | orchestrator | TASK [Print configuration data] ************************************************ 2025-11-08 13:39:23.168361 | orchestrator | Saturday 08 November 2025 13:39:20 +0000 (0:00:00.129) 0:00:25.222 ***** 2025-11-08 13:39:23.168372 | orchestrator | changed: [testbed-node-4] => { 2025-11-08 
13:39:23.168383 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-11-08 13:39:23.168394 | orchestrator |  "ceph_osd_devices": { 2025-11-08 13:39:23.168410 | orchestrator |  "sdb": { 2025-11-08 13:39:23.168421 | orchestrator |  "osd_lvm_uuid": "f393addc-5b9a-54bf-a4a6-7d44f9449202" 2025-11-08 13:39:23.168432 | orchestrator |  }, 2025-11-08 13:39:23.168443 | orchestrator |  "sdc": { 2025-11-08 13:39:23.168454 | orchestrator |  "osd_lvm_uuid": "380ddcdc-ed2e-5f5e-8a3f-001787d903df" 2025-11-08 13:39:23.168465 | orchestrator |  } 2025-11-08 13:39:23.168475 | orchestrator |  }, 2025-11-08 13:39:23.168486 | orchestrator |  "lvm_volumes": [ 2025-11-08 13:39:23.168497 | orchestrator |  { 2025-11-08 13:39:23.168508 | orchestrator |  "data": "osd-block-f393addc-5b9a-54bf-a4a6-7d44f9449202", 2025-11-08 13:39:23.168518 | orchestrator |  "data_vg": "ceph-f393addc-5b9a-54bf-a4a6-7d44f9449202" 2025-11-08 13:39:23.168529 | orchestrator |  }, 2025-11-08 13:39:23.168540 | orchestrator |  { 2025-11-08 13:39:23.168550 | orchestrator |  "data": "osd-block-380ddcdc-ed2e-5f5e-8a3f-001787d903df", 2025-11-08 13:39:23.168561 | orchestrator |  "data_vg": "ceph-380ddcdc-ed2e-5f5e-8a3f-001787d903df" 2025-11-08 13:39:23.168572 | orchestrator |  } 2025-11-08 13:39:23.168583 | orchestrator |  ] 2025-11-08 13:39:23.168593 | orchestrator |  } 2025-11-08 13:39:23.168604 | orchestrator | } 2025-11-08 13:39:23.168614 | orchestrator | 2025-11-08 13:39:23.168625 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-11-08 13:39:23.168636 | orchestrator | Saturday 08 November 2025 13:39:20 +0000 (0:00:00.159) 0:00:25.382 ***** 2025-11-08 13:39:23.168647 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-11-08 13:39:23.168658 | orchestrator | 2025-11-08 13:39:23.168668 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-11-08 13:39:23.168679 | orchestrator | 2025-11-08 13:39:23.168717 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-11-08 13:39:23.168736 | orchestrator | Saturday 08 November 2025 13:39:21 +0000 (0:00:00.969) 0:00:26.351 ***** 2025-11-08 13:39:23.168754 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-11-08 13:39:23.168772 | orchestrator | 2025-11-08 13:39:23.168790 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-11-08 13:39:23.168819 | orchestrator | Saturday 08 November 2025 13:39:22 +0000 (0:00:00.547) 0:00:26.899 ***** 2025-11-08 13:39:23.168837 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:39:23.168854 | orchestrator | 2025-11-08 13:39:23.168871 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:39:23.168887 | orchestrator | Saturday 08 November 2025 13:39:22 +0000 (0:00:00.242) 0:00:27.142 ***** 2025-11-08 13:39:23.168905 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-11-08 13:39:23.168922 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-11-08 13:39:23.168939 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-11-08 13:39:23.168956 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-11-08 13:39:23.168973 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-11-08 13:39:23.168999 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-11-08 13:39:30.732607 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-11-08 13:39:30.732745 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-11-08 13:39:30.732762 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-11-08 13:39:30.732774 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-11-08 13:39:30.732784 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-11-08 13:39:30.732795 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-11-08 13:39:30.732805 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-11-08 13:39:30.732817 | orchestrator | 2025-11-08 13:39:30.732829 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:39:30.732840 | orchestrator | Saturday 08 November 2025 13:39:23 +0000 (0:00:00.432) 0:00:27.574 ***** 2025-11-08 13:39:30.732852 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:39:30.732872 | orchestrator | 2025-11-08 13:39:30.732890 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:39:30.732908 | orchestrator | Saturday 08 November 2025 13:39:23 +0000 (0:00:00.180) 0:00:27.754 ***** 2025-11-08 13:39:30.732933 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:39:30.732956 | orchestrator | 2025-11-08 13:39:30.732973 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:39:30.732993 | orchestrator | Saturday 08 November 2025 13:39:23 +0000 (0:00:00.183) 0:00:27.938 ***** 2025-11-08 13:39:30.733012 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:39:30.733030 | orchestrator | 2025-11-08 13:39:30.733045 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:39:30.733056 | orchestrator | Saturday 08 November 2025 13:39:23 +0000 (0:00:00.240) 0:00:28.179 ***** 2025-11-08 13:39:30.733066 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:39:30.733077 | orchestrator | 2025-11-08 13:39:30.733087 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:39:30.733098 | orchestrator | Saturday 08 November 2025 13:39:23 +0000 (0:00:00.197) 0:00:28.376 ***** 2025-11-08 13:39:30.733109 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:39:30.733121 | orchestrator | 2025-11-08 13:39:30.733133 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:39:30.733145 | orchestrator | Saturday 08 November 2025 13:39:24 +0000 (0:00:00.198) 0:00:28.575 ***** 2025-11-08 13:39:30.733157 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:39:30.733169 | orchestrator | 2025-11-08 13:39:30.733200 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:39:30.733237 | orchestrator | Saturday 08 November 2025 13:39:24 +0000 (0:00:00.183) 0:00:28.759 ***** 2025-11-08 13:39:30.733250 | orchestrator | skipping: 
[testbed-node-5] 2025-11-08 13:39:30.733262 | orchestrator | 2025-11-08 13:39:30.733275 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:39:30.733287 | orchestrator | Saturday 08 November 2025 13:39:24 +0000 (0:00:00.178) 0:00:28.937 ***** 2025-11-08 13:39:30.733298 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:39:30.733309 | orchestrator | 2025-11-08 13:39:30.733322 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:39:30.733334 | orchestrator | Saturday 08 November 2025 13:39:24 +0000 (0:00:00.175) 0:00:29.112 ***** 2025-11-08 13:39:30.733346 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165) 2025-11-08 13:39:30.733359 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165) 2025-11-08 13:39:30.733372 | orchestrator | 2025-11-08 13:39:30.733384 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:39:30.733396 | orchestrator | Saturday 08 November 2025 13:39:25 +0000 (0:00:00.646) 0:00:29.759 ***** 2025-11-08 13:39:30.733408 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4485c49e-1f3e-4177-b8cf-e377966726ff) 2025-11-08 13:39:30.733420 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4485c49e-1f3e-4177-b8cf-e377966726ff) 2025-11-08 13:39:30.733432 | orchestrator | 2025-11-08 13:39:30.733444 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:39:30.733456 | orchestrator | Saturday 08 November 2025 13:39:25 +0000 (0:00:00.486) 0:00:30.245 ***** 2025-11-08 13:39:30.733468 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f84a4500-4dd6-44ad-a9ff-274f9f36fc36) 2025-11-08 13:39:30.733480 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f84a4500-4dd6-44ad-a9ff-274f9f36fc36) 2025-11-08 13:39:30.733491 | orchestrator | 2025-11-08 13:39:30.733501 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:39:30.733512 | orchestrator | Saturday 08 November 2025 13:39:26 +0000 (0:00:00.465) 0:00:30.711 ***** 2025-11-08 13:39:30.733522 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c4ff64d0-4838-4e36-9da9-d01e7c6d3995) 2025-11-08 13:39:30.733533 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c4ff64d0-4838-4e36-9da9-d01e7c6d3995) 2025-11-08 13:39:30.733544 | orchestrator | 2025-11-08 13:39:30.733554 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:39:30.733565 | orchestrator | Saturday 08 November 2025 13:39:26 +0000 (0:00:00.475) 0:00:31.187 ***** 2025-11-08 13:39:30.733575 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-11-08 13:39:30.733586 | orchestrator | 2025-11-08 13:39:30.733597 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:39:30.733626 | orchestrator | Saturday 08 November 2025 13:39:27 +0000 (0:00:00.341) 0:00:31.528 ***** 2025-11-08 13:39:30.733637 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-11-08 13:39:30.733648 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 
2025-11-08 13:39:30.733659 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-11-08 13:39:30.733669 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-11-08 13:39:30.733680 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-11-08 13:39:30.733726 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-11-08 13:39:30.733738 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-11-08 13:39:30.733749 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-11-08 13:39:30.733768 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-11-08 13:39:30.733779 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-11-08 13:39:30.733790 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-11-08 13:39:30.733800 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-11-08 13:39:30.733811 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-11-08 13:39:30.733822 | orchestrator | 2025-11-08 13:39:30.733832 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:39:30.733843 | orchestrator | Saturday 08 November 2025 13:39:27 +0000 (0:00:00.418) 0:00:31.947 ***** 2025-11-08 13:39:30.733853 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:39:30.733864 | orchestrator | 2025-11-08 13:39:30.733875 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:39:30.733885 | orchestrator | Saturday 08 November 2025 13:39:27 +0000 (0:00:00.187) 0:00:32.135 ***** 2025-11-08 13:39:30.733896 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:39:30.733906 | orchestrator | 2025-11-08 13:39:30.733917 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:39:30.733928 | orchestrator | Saturday 08 November 2025 13:39:27 +0000 (0:00:00.221) 0:00:32.357 ***** 2025-11-08 13:39:30.733938 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:39:30.733949 | orchestrator | 2025-11-08 13:39:30.733959 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:39:30.733970 | orchestrator | Saturday 08 November 2025 13:39:28 +0000 (0:00:00.200) 0:00:32.558 ***** 2025-11-08 13:39:30.733990 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:39:30.734009 | orchestrator | 2025-11-08 13:39:30.734136 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:39:30.734156 | orchestrator | Saturday 08 November 2025 13:39:28 +0000 (0:00:00.209) 0:00:32.767 ***** 2025-11-08 13:39:30.734172 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:39:30.734183 | orchestrator | 2025-11-08 13:39:30.734193 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:39:30.734204 | orchestrator | Saturday 08 November 2025 13:39:28 +0000 (0:00:00.253) 0:00:33.021 ***** 2025-11-08 13:39:30.734215 | orchestrator | 
skipping: [testbed-node-5] 2025-11-08 13:39:30.734225 | orchestrator | 2025-11-08 13:39:30.734236 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:39:30.734247 | orchestrator | Saturday 08 November 2025 13:39:29 +0000 (0:00:00.526) 0:00:33.547 ***** 2025-11-08 13:39:30.734257 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:39:30.734268 | orchestrator | 2025-11-08 13:39:30.734279 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:39:30.734289 | orchestrator | Saturday 08 November 2025 13:39:29 +0000 (0:00:00.182) 0:00:33.730 ***** 2025-11-08 13:39:30.734300 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:39:30.734311 | orchestrator | 2025-11-08 13:39:30.734321 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:39:30.734332 | orchestrator | Saturday 08 November 2025 13:39:29 +0000 (0:00:00.162) 0:00:33.893 ***** 2025-11-08 13:39:30.734343 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-11-08 13:39:30.734354 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-11-08 13:39:30.734365 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-11-08 13:39:30.734375 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-11-08 13:39:30.734386 | orchestrator | 2025-11-08 13:39:30.734397 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:39:30.734407 | orchestrator | Saturday 08 November 2025 13:39:30 +0000 (0:00:00.620) 0:00:34.514 ***** 2025-11-08 13:39:30.734418 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:39:30.734439 | orchestrator | 2025-11-08 13:39:30.734450 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:39:30.734470 | orchestrator | Saturday 08 November 2025 13:39:30 +0000 (0:00:00.179) 0:00:34.693 ***** 2025-11-08 13:39:30.734481 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:39:30.734492 | orchestrator | 2025-11-08 13:39:30.734503 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:39:30.734514 | orchestrator | Saturday 08 November 2025 13:39:30 +0000 (0:00:00.140) 0:00:34.834 ***** 2025-11-08 13:39:30.734525 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:39:30.734535 | orchestrator | 2025-11-08 13:39:30.734546 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:39:30.734557 | orchestrator | Saturday 08 November 2025 13:39:30 +0000 (0:00:00.139) 0:00:34.973 ***** 2025-11-08 13:39:30.734568 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:39:30.734579 | orchestrator | 2025-11-08 13:39:30.734599 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-11-08 13:39:33.953897 | orchestrator | Saturday 08 November 2025 13:39:30 +0000 (0:00:00.168) 0:00:35.142 ***** 2025-11-08 13:39:33.954000 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-11-08 13:39:33.954011 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-11-08 13:39:33.954054 | orchestrator | 2025-11-08 13:39:33.954062 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-11-08 13:39:33.954070 | orchestrator | Saturday 08 November 2025 13:39:30 +0000 (0:00:00.183) 0:00:35.326 
***** 2025-11-08 13:39:33.954076 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:39:33.954083 | orchestrator | 2025-11-08 13:39:33.954090 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-11-08 13:39:33.954097 | orchestrator | Saturday 08 November 2025 13:39:30 +0000 (0:00:00.088) 0:00:35.414 ***** 2025-11-08 13:39:33.954103 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:39:33.954109 | orchestrator | 2025-11-08 13:39:33.954115 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-11-08 13:39:33.954121 | orchestrator | Saturday 08 November 2025 13:39:31 +0000 (0:00:00.087) 0:00:35.501 ***** 2025-11-08 13:39:33.954127 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:39:33.954134 | orchestrator | 2025-11-08 13:39:33.954140 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-11-08 13:39:33.954146 | orchestrator | Saturday 08 November 2025 13:39:31 +0000 (0:00:00.223) 0:00:35.725 ***** 2025-11-08 13:39:33.954152 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:39:33.954160 | orchestrator | 2025-11-08 13:39:33.954167 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-11-08 13:39:33.954173 | orchestrator | Saturday 08 November 2025 13:39:31 +0000 (0:00:00.090) 0:00:35.815 ***** 2025-11-08 13:39:33.954180 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '56ba2a68-c761-5674-9bd2-a2481e6b0580'}}) 2025-11-08 13:39:33.954187 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b5af892c-b8e4-5298-acf4-1670635abe97'}}) 2025-11-08 13:39:33.954192 | orchestrator | 2025-11-08 13:39:33.954198 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-11-08 13:39:33.954204 | orchestrator | Saturday 08 November 2025 13:39:31 +0000 (0:00:00.109) 0:00:35.925 ***** 2025-11-08 13:39:33.954210 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '56ba2a68-c761-5674-9bd2-a2481e6b0580'}})  2025-11-08 13:39:33.954233 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b5af892c-b8e4-5298-acf4-1670635abe97'}})  2025-11-08 13:39:33.954240 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:39:33.954246 | orchestrator | 2025-11-08 13:39:33.954252 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-11-08 13:39:33.954258 | orchestrator | Saturday 08 November 2025 13:39:31 +0000 (0:00:00.103) 0:00:36.029 ***** 2025-11-08 13:39:33.954283 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '56ba2a68-c761-5674-9bd2-a2481e6b0580'}})  2025-11-08 13:39:33.954290 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b5af892c-b8e4-5298-acf4-1670635abe97'}})  2025-11-08 13:39:33.954296 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:39:33.954302 | orchestrator | 2025-11-08 13:39:33.954308 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-11-08 13:39:33.954314 | orchestrator | Saturday 08 November 2025 13:39:31 +0000 (0:00:00.100) 0:00:36.129 ***** 2025-11-08 13:39:33.954320 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 
'56ba2a68-c761-5674-9bd2-a2481e6b0580'}})  2025-11-08 13:39:33.954326 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b5af892c-b8e4-5298-acf4-1670635abe97'}})  2025-11-08 13:39:33.954332 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:39:33.954337 | orchestrator | 2025-11-08 13:39:33.954344 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-11-08 13:39:33.954350 | orchestrator | Saturday 08 November 2025 13:39:31 +0000 (0:00:00.096) 0:00:36.226 ***** 2025-11-08 13:39:33.954356 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:39:33.954361 | orchestrator | 2025-11-08 13:39:33.954368 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-11-08 13:39:33.954373 | orchestrator | Saturday 08 November 2025 13:39:31 +0000 (0:00:00.089) 0:00:36.316 ***** 2025-11-08 13:39:33.954378 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:39:33.954384 | orchestrator | 2025-11-08 13:39:33.954389 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-11-08 13:39:33.954395 | orchestrator | Saturday 08 November 2025 13:39:31 +0000 (0:00:00.090) 0:00:36.406 ***** 2025-11-08 13:39:33.954401 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:39:33.954407 | orchestrator | 2025-11-08 13:39:33.954412 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-11-08 13:39:33.954418 | orchestrator | Saturday 08 November 2025 13:39:32 +0000 (0:00:00.088) 0:00:36.494 ***** 2025-11-08 13:39:33.954423 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:39:33.954429 | orchestrator | 2025-11-08 13:39:33.954435 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-11-08 13:39:33.954441 | orchestrator | Saturday 08 November 2025 13:39:32 +0000 (0:00:00.084) 0:00:36.579 ***** 2025-11-08 13:39:33.954448 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:39:33.954454 | orchestrator | 2025-11-08 13:39:33.954460 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-11-08 13:39:33.954466 | orchestrator | Saturday 08 November 2025 13:39:32 +0000 (0:00:00.093) 0:00:36.672 ***** 2025-11-08 13:39:33.954472 | orchestrator | ok: [testbed-node-5] => { 2025-11-08 13:39:33.954478 | orchestrator |  "ceph_osd_devices": { 2025-11-08 13:39:33.954484 | orchestrator |  "sdb": { 2025-11-08 13:39:33.954509 | orchestrator |  "osd_lvm_uuid": "56ba2a68-c761-5674-9bd2-a2481e6b0580" 2025-11-08 13:39:33.954516 | orchestrator |  }, 2025-11-08 13:39:33.954523 | orchestrator |  "sdc": { 2025-11-08 13:39:33.954529 | orchestrator |  "osd_lvm_uuid": "b5af892c-b8e4-5298-acf4-1670635abe97" 2025-11-08 13:39:33.954565 | orchestrator |  } 2025-11-08 13:39:33.954572 | orchestrator |  } 2025-11-08 13:39:33.954578 | orchestrator | } 2025-11-08 13:39:33.954585 | orchestrator | 2025-11-08 13:39:33.954592 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-11-08 13:39:33.954597 | orchestrator | Saturday 08 November 2025 13:39:32 +0000 (0:00:00.112) 0:00:36.784 ***** 2025-11-08 13:39:33.954603 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:39:33.954609 | orchestrator | 2025-11-08 13:39:33.954615 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-11-08 13:39:33.954621 | orchestrator | Saturday 08 
November 2025 13:39:32 +0000 (0:00:00.121) 0:00:36.906 ***** 2025-11-08 13:39:33.954635 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:39:33.954641 | orchestrator | 2025-11-08 13:39:33.954648 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-11-08 13:39:33.954654 | orchestrator | Saturday 08 November 2025 13:39:32 +0000 (0:00:00.295) 0:00:37.202 ***** 2025-11-08 13:39:33.954660 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:39:33.954666 | orchestrator | 2025-11-08 13:39:33.954672 | orchestrator | TASK [Print configuration data] ************************************************ 2025-11-08 13:39:33.954678 | orchestrator | Saturday 08 November 2025 13:39:32 +0000 (0:00:00.114) 0:00:37.317 ***** 2025-11-08 13:39:33.954699 | orchestrator | changed: [testbed-node-5] => { 2025-11-08 13:39:33.954705 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-11-08 13:39:33.954711 | orchestrator |  "ceph_osd_devices": { 2025-11-08 13:39:33.954718 | orchestrator |  "sdb": { 2025-11-08 13:39:33.954724 | orchestrator |  "osd_lvm_uuid": "56ba2a68-c761-5674-9bd2-a2481e6b0580" 2025-11-08 13:39:33.954731 | orchestrator |  }, 2025-11-08 13:39:33.954737 | orchestrator |  "sdc": { 2025-11-08 13:39:33.954745 | orchestrator |  "osd_lvm_uuid": "b5af892c-b8e4-5298-acf4-1670635abe97" 2025-11-08 13:39:33.954751 | orchestrator |  } 2025-11-08 13:39:33.954758 | orchestrator |  }, 2025-11-08 13:39:33.954765 | orchestrator |  "lvm_volumes": [ 2025-11-08 13:39:33.954772 | orchestrator |  { 2025-11-08 13:39:33.954779 | orchestrator |  "data": "osd-block-56ba2a68-c761-5674-9bd2-a2481e6b0580", 2025-11-08 13:39:33.954785 | orchestrator |  "data_vg": "ceph-56ba2a68-c761-5674-9bd2-a2481e6b0580" 2025-11-08 13:39:33.954792 | orchestrator |  }, 2025-11-08 13:39:33.954798 | orchestrator |  { 2025-11-08 13:39:33.954806 | orchestrator |  "data": "osd-block-b5af892c-b8e4-5298-acf4-1670635abe97", 2025-11-08 13:39:33.954812 | orchestrator |  "data_vg": "ceph-b5af892c-b8e4-5298-acf4-1670635abe97" 2025-11-08 13:39:33.954828 | orchestrator |  } 2025-11-08 13:39:33.954839 | orchestrator |  ] 2025-11-08 13:39:33.954845 | orchestrator |  } 2025-11-08 13:39:33.954851 | orchestrator | } 2025-11-08 13:39:33.954857 | orchestrator | 2025-11-08 13:39:33.954863 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-11-08 13:39:33.954869 | orchestrator | Saturday 08 November 2025 13:39:33 +0000 (0:00:00.191) 0:00:37.509 ***** 2025-11-08 13:39:33.954875 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-11-08 13:39:33.954882 | orchestrator | 2025-11-08 13:39:33.954888 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 13:39:33.954895 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-11-08 13:39:33.954903 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-11-08 13:39:33.954910 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-11-08 13:39:33.954916 | orchestrator | 2025-11-08 13:39:33.954922 | orchestrator | 2025-11-08 13:39:33.954928 | orchestrator | 2025-11-08 13:39:33.954933 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 13:39:33.954939 | orchestrator | Saturday 08 November 2025 13:39:33 +0000 
(0:00:00.841) 0:00:38.350 ***** 2025-11-08 13:39:33.954945 | orchestrator | =============================================================================== 2025-11-08 13:39:33.954950 | orchestrator | Write configuration file ------------------------------------------------ 3.58s 2025-11-08 13:39:33.954956 | orchestrator | Add known links to the list of available block devices ------------------ 1.28s 2025-11-08 13:39:33.954962 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.08s 2025-11-08 13:39:33.954969 | orchestrator | Add known partitions to the list of available block devices ------------- 1.07s 2025-11-08 13:39:33.954982 | orchestrator | Add known partitions to the list of available block devices ------------- 0.98s 2025-11-08 13:39:33.954988 | orchestrator | Add known links to the list of available block devices ------------------ 0.80s 2025-11-08 13:39:33.954994 | orchestrator | Print configuration data ------------------------------------------------ 0.73s 2025-11-08 13:39:33.955000 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s 2025-11-08 13:39:33.955005 | orchestrator | Get initial list of available block devices ----------------------------- 0.69s 2025-11-08 13:39:33.955011 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s 2025-11-08 13:39:33.955016 | orchestrator | Add known links to the list of available block devices ------------------ 0.62s 2025-11-08 13:39:33.955022 | orchestrator | Add known partitions to the list of available block devices ------------- 0.62s 2025-11-08 13:39:33.955028 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s 2025-11-08 13:39:33.955043 | orchestrator | Add known links to the list of available block devices ------------------ 0.58s 2025-11-08 13:39:34.162344 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.57s 2025-11-08 13:39:34.162421 | orchestrator | Print DB devices -------------------------------------------------------- 0.55s 2025-11-08 13:39:34.162429 | orchestrator | Add known partitions to the list of available block devices ------------- 0.54s 2025-11-08 13:39:34.162435 | orchestrator | Add known partitions to the list of available block devices ------------- 0.53s 2025-11-08 13:39:34.162441 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.51s 2025-11-08 13:39:34.162446 | orchestrator | Set DB devices config data ---------------------------------------------- 0.51s 2025-11-08 13:39:56.781148 | orchestrator | 2025-11-08 13:39:56 | INFO  | Task 55f4debd-ec5e-4193-b181-c6e56d61eaad (sync inventory) is running in background. Output coming soon. 
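Editor's note: the play that just finished derives the lvm_volumes list from ceph_osd_devices. Each device entry carries an osd_lvm_uuid, and in the block-only case used on these testbed nodes it is expanded into an osd-block-<uuid> logical volume inside a ceph-<uuid> volume group, exactly as shown in the printed configuration data above. A minimal sketch of that mapping, assuming a set_fact-based implementation (the loop item shape matches the log output; the exact OSISM task file may differ, and the separate block+db / block+wal / block+db+wal branches shown above are omitted here):

# Sketch only -- reproduces the block-only lvm_volumes mapping visible in the log.
- name: Generate lvm_volumes structure (block only)
  ansible.builtin.set_fact:
    lvm_volumes: >-
      {{ lvm_volumes | default([]) + [{
           'data': 'osd-block-' ~ item.value.osd_lvm_uuid,
           'data_vg': 'ceph-' ~ item.value.osd_lvm_uuid
         }] }}
  # item has the form {'key': 'sdb', 'value': {'osd_lvm_uuid': '...'}} as in the log
  loop: "{{ ceph_osd_devices | dict2items }}"

The compiled list is then written back to the configuration repository by the "Write configuration file" handler, which is why the PLAY RECAP above shows changed=2 per node.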
2025-11-08 13:40:23.677945 | orchestrator | 2025-11-08 13:39:58 | INFO  | Starting group_vars file reorganization 2025-11-08 13:40:23.678116 | orchestrator | 2025-11-08 13:39:58 | INFO  | Moved 0 file(s) to their respective directories 2025-11-08 13:40:23.678135 | orchestrator | 2025-11-08 13:39:58 | INFO  | Group_vars file reorganization completed 2025-11-08 13:40:23.678147 | orchestrator | 2025-11-08 13:40:01 | INFO  | Starting variable preparation from inventory 2025-11-08 13:40:23.678159 | orchestrator | 2025-11-08 13:40:04 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-11-08 13:40:23.678170 | orchestrator | 2025-11-08 13:40:04 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-11-08 13:40:23.678182 | orchestrator | 2025-11-08 13:40:04 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-11-08 13:40:23.678198 | orchestrator | 2025-11-08 13:40:04 | INFO  | 3 file(s) written, 6 host(s) processed 2025-11-08 13:40:23.678219 | orchestrator | 2025-11-08 13:40:04 | INFO  | Variable preparation completed 2025-11-08 13:40:23.678239 | orchestrator | 2025-11-08 13:40:05 | INFO  | Starting inventory overwrite handling 2025-11-08 13:40:23.678259 | orchestrator | 2025-11-08 13:40:05 | INFO  | Handling group overwrites in 99-overwrite 2025-11-08 13:40:23.678277 | orchestrator | 2025-11-08 13:40:05 | INFO  | Removing group frr:children from 60-generic 2025-11-08 13:40:23.678296 | orchestrator | 2025-11-08 13:40:05 | INFO  | Removing group storage:children from 50-kolla 2025-11-08 13:40:23.678314 | orchestrator | 2025-11-08 13:40:05 | INFO  | Removing group netbird:children from 50-infrastructure 2025-11-08 13:40:23.678344 | orchestrator | 2025-11-08 13:40:05 | INFO  | Removing group ceph-rgw from 50-ceph 2025-11-08 13:40:23.678363 | orchestrator | 2025-11-08 13:40:05 | INFO  | Removing group ceph-mds from 50-ceph 2025-11-08 13:40:23.678415 | orchestrator | 2025-11-08 13:40:05 | INFO  | Handling group overwrites in 20-roles 2025-11-08 13:40:23.678428 | orchestrator | 2025-11-08 13:40:05 | INFO  | Removing group k3s_node from 50-infrastructure 2025-11-08 13:40:23.678439 | orchestrator | 2025-11-08 13:40:05 | INFO  | Removed 6 group(s) in total 2025-11-08 13:40:23.678450 | orchestrator | 2025-11-08 13:40:05 | INFO  | Inventory overwrite handling completed 2025-11-08 13:40:23.678461 | orchestrator | 2025-11-08 13:40:06 | INFO  | Starting merge of inventory files 2025-11-08 13:40:23.678474 | orchestrator | 2025-11-08 13:40:06 | INFO  | Inventory files merged successfully 2025-11-08 13:40:23.678486 | orchestrator | 2025-11-08 13:40:11 | INFO  | Generating ClusterShell configuration from Ansible inventory 2025-11-08 13:40:23.678498 | orchestrator | 2025-11-08 13:40:22 | INFO  | Successfully wrote ClusterShell configuration 2025-11-08 13:40:23.678511 | orchestrator | [master 6aa8a69] 2025-11-08-13-40 2025-11-08 13:40:23.678524 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2025-11-08 13:40:26.101411 | orchestrator | 2025-11-08 13:40:26 | INFO  | Task a62b09ea-c0cb-494b-8c76-ed48bbe6c3d0 (ceph-create-lvm-devices) was prepared for execution. 2025-11-08 13:40:26.101493 | orchestrator | 2025-11-08 13:40:26 | INFO  | It takes a moment until task a62b09ea-c0cb-494b-8c76-ed48bbe6c3d0 (ceph-create-lvm-devices) has been started and output is visible here. 
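Editor's note: the sync-inventory task reports three generated group_vars files. For illustration only (every value below is a placeholder; the actual FSID, RGW hosts, and monitor list are not visible in this log, and the file layout is assumed), such generated files plausibly look like:

# 050-ceph-cluster-fsid.yml -- sketch, placeholder UUID
ceph_cluster_fsid: "00000000-0000-0000-0000-000000000000"

# 050-infrastructure-cephclient-mons.yml -- sketch, assumed to be a list of monitor hosts
cephclient_mons:
  - <mon-host-1>   # placeholders; the real entries are derived from the Ansible inventory
  - <mon-host-2>
  - <mon-host-3>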
2025-11-08 13:40:37.235940 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2025-11-08 13:40:37.236083 | orchestrator | 2.16.14 2025-11-08 13:40:37.236106 | orchestrator | 2025-11-08 13:40:37.236123 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-11-08 13:40:37.236139 | orchestrator | 2025-11-08 13:40:37.236153 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-11-08 13:40:37.236168 | orchestrator | Saturday 08 November 2025 13:40:29 +0000 (0:00:00.239) 0:00:00.240 ***** 2025-11-08 13:40:37.236184 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-11-08 13:40:37.236199 | orchestrator | 2025-11-08 13:40:37.236214 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-11-08 13:40:37.236228 | orchestrator | Saturday 08 November 2025 13:40:30 +0000 (0:00:00.254) 0:00:00.494 ***** 2025-11-08 13:40:37.236242 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:40:37.236256 | orchestrator | 2025-11-08 13:40:37.236271 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:40:37.236286 | orchestrator | Saturday 08 November 2025 13:40:30 +0000 (0:00:00.243) 0:00:00.737 ***** 2025-11-08 13:40:37.236303 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-11-08 13:40:37.236319 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-11-08 13:40:37.236335 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-11-08 13:40:37.236351 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-11-08 13:40:37.236367 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-11-08 13:40:37.236383 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-11-08 13:40:37.236399 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-11-08 13:40:37.236415 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-11-08 13:40:37.236458 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-11-08 13:40:37.236475 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-11-08 13:40:37.236491 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-11-08 13:40:37.236539 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-11-08 13:40:37.236556 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-11-08 13:40:37.236571 | orchestrator | 2025-11-08 13:40:37.236588 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:40:37.236604 | orchestrator | Saturday 08 November 2025 13:40:30 +0000 (0:00:00.526) 0:00:01.264 ***** 2025-11-08 13:40:37.236620 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:37.236635 | orchestrator | 2025-11-08 13:40:37.236650 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:40:37.236664 | orchestrator 
| Saturday 08 November 2025 13:40:31 +0000 (0:00:00.172) 0:00:01.436 ***** 2025-11-08 13:40:37.236714 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:37.236731 | orchestrator | 2025-11-08 13:40:37.236746 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:40:37.236763 | orchestrator | Saturday 08 November 2025 13:40:31 +0000 (0:00:00.175) 0:00:01.612 ***** 2025-11-08 13:40:37.236777 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:37.236791 | orchestrator | 2025-11-08 13:40:37.236803 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:40:37.236816 | orchestrator | Saturday 08 November 2025 13:40:31 +0000 (0:00:00.173) 0:00:01.786 ***** 2025-11-08 13:40:37.236828 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:37.236843 | orchestrator | 2025-11-08 13:40:37.236858 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:40:37.236873 | orchestrator | Saturday 08 November 2025 13:40:31 +0000 (0:00:00.184) 0:00:01.970 ***** 2025-11-08 13:40:37.236887 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:37.236901 | orchestrator | 2025-11-08 13:40:37.236915 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:40:37.236926 | orchestrator | Saturday 08 November 2025 13:40:31 +0000 (0:00:00.186) 0:00:02.156 ***** 2025-11-08 13:40:37.236935 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:37.236943 | orchestrator | 2025-11-08 13:40:37.236952 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:40:37.236960 | orchestrator | Saturday 08 November 2025 13:40:31 +0000 (0:00:00.193) 0:00:02.349 ***** 2025-11-08 13:40:37.236969 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:37.236977 | orchestrator | 2025-11-08 13:40:37.236986 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:40:37.236994 | orchestrator | Saturday 08 November 2025 13:40:32 +0000 (0:00:00.202) 0:00:02.552 ***** 2025-11-08 13:40:37.237002 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:37.237011 | orchestrator | 2025-11-08 13:40:37.237019 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:40:37.237028 | orchestrator | Saturday 08 November 2025 13:40:32 +0000 (0:00:00.206) 0:00:02.759 ***** 2025-11-08 13:40:37.237037 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7bcb89ad-f0c3-4ca7-8180-786cf7e929b8) 2025-11-08 13:40:37.237048 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7bcb89ad-f0c3-4ca7-8180-786cf7e929b8) 2025-11-08 13:40:37.237057 | orchestrator | 2025-11-08 13:40:37.237065 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:40:37.237098 | orchestrator | Saturday 08 November 2025 13:40:32 +0000 (0:00:00.385) 0:00:03.144 ***** 2025-11-08 13:40:37.237107 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ce3e3473-55e8-454e-8a0a-ac291b184d20) 2025-11-08 13:40:37.237116 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ce3e3473-55e8-454e-8a0a-ac291b184d20) 2025-11-08 13:40:37.237125 | orchestrator | 2025-11-08 13:40:37.237133 | orchestrator | TASK [Add known links to the list of available block 
devices] ****************** 2025-11-08 13:40:37.237142 | orchestrator | Saturday 08 November 2025 13:40:33 +0000 (0:00:00.547) 0:00:03.691 ***** 2025-11-08 13:40:37.237162 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3757d830-b0af-49e2-85a4-9877085f3a2f) 2025-11-08 13:40:37.237171 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3757d830-b0af-49e2-85a4-9877085f3a2f) 2025-11-08 13:40:37.237179 | orchestrator | 2025-11-08 13:40:37.237188 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:40:37.237196 | orchestrator | Saturday 08 November 2025 13:40:33 +0000 (0:00:00.560) 0:00:04.252 ***** 2025-11-08 13:40:37.237205 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e000e6ad-d7f7-4db6-bbc8-734d25f4dc3b) 2025-11-08 13:40:37.237214 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e000e6ad-d7f7-4db6-bbc8-734d25f4dc3b) 2025-11-08 13:40:37.237222 | orchestrator | 2025-11-08 13:40:37.237231 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:40:37.237239 | orchestrator | Saturday 08 November 2025 13:40:34 +0000 (0:00:01.020) 0:00:05.272 ***** 2025-11-08 13:40:37.237248 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-11-08 13:40:37.237256 | orchestrator | 2025-11-08 13:40:37.237265 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:40:37.237273 | orchestrator | Saturday 08 November 2025 13:40:35 +0000 (0:00:00.396) 0:00:05.668 ***** 2025-11-08 13:40:37.237282 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-11-08 13:40:37.237290 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-11-08 13:40:37.237299 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-11-08 13:40:37.237307 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-11-08 13:40:37.237316 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-11-08 13:40:37.237324 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-11-08 13:40:37.237333 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-11-08 13:40:37.237341 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-11-08 13:40:37.237350 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-11-08 13:40:37.237359 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-11-08 13:40:37.237368 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-11-08 13:40:37.237376 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-11-08 13:40:37.237385 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-11-08 13:40:37.237393 | orchestrator | 2025-11-08 13:40:37.237402 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:40:37.237410 | orchestrator 
| Saturday 08 November 2025 13:40:35 +0000 (0:00:00.498) 0:00:06.167 ***** 2025-11-08 13:40:37.237419 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:37.237427 | orchestrator | 2025-11-08 13:40:37.237436 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:40:37.237445 | orchestrator | Saturday 08 November 2025 13:40:35 +0000 (0:00:00.198) 0:00:06.365 ***** 2025-11-08 13:40:37.237453 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:37.237461 | orchestrator | 2025-11-08 13:40:37.237470 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:40:37.237479 | orchestrator | Saturday 08 November 2025 13:40:36 +0000 (0:00:00.201) 0:00:06.567 ***** 2025-11-08 13:40:37.237487 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:37.237496 | orchestrator | 2025-11-08 13:40:37.237504 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:40:37.237519 | orchestrator | Saturday 08 November 2025 13:40:36 +0000 (0:00:00.196) 0:00:06.763 ***** 2025-11-08 13:40:37.237528 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:37.237536 | orchestrator | 2025-11-08 13:40:37.237545 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:40:37.237553 | orchestrator | Saturday 08 November 2025 13:40:36 +0000 (0:00:00.200) 0:00:06.964 ***** 2025-11-08 13:40:37.237562 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:37.237570 | orchestrator | 2025-11-08 13:40:37.237579 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:40:37.237587 | orchestrator | Saturday 08 November 2025 13:40:36 +0000 (0:00:00.190) 0:00:07.154 ***** 2025-11-08 13:40:37.237596 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:37.237604 | orchestrator | 2025-11-08 13:40:37.237612 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:40:37.237621 | orchestrator | Saturday 08 November 2025 13:40:36 +0000 (0:00:00.227) 0:00:07.382 ***** 2025-11-08 13:40:37.237629 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:37.237638 | orchestrator | 2025-11-08 13:40:37.237651 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:40:45.594629 | orchestrator | Saturday 08 November 2025 13:40:37 +0000 (0:00:00.245) 0:00:07.627 ***** 2025-11-08 13:40:45.594770 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:45.594788 | orchestrator | 2025-11-08 13:40:45.594801 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:40:45.594813 | orchestrator | Saturday 08 November 2025 13:40:37 +0000 (0:00:00.226) 0:00:07.854 ***** 2025-11-08 13:40:45.594824 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-11-08 13:40:45.594835 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-11-08 13:40:45.594847 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-11-08 13:40:45.594857 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-11-08 13:40:45.594868 | orchestrator | 2025-11-08 13:40:45.594879 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:40:45.594890 | orchestrator | Saturday 08 November 2025 13:40:38 +0000 (0:00:01.223) 0:00:09.077 ***** 2025-11-08 
13:40:45.594901 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:45.594912 | orchestrator | 2025-11-08 13:40:45.594923 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:40:45.594933 | orchestrator | Saturday 08 November 2025 13:40:38 +0000 (0:00:00.251) 0:00:09.329 ***** 2025-11-08 13:40:45.594944 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:45.594955 | orchestrator | 2025-11-08 13:40:45.594966 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:40:45.594977 | orchestrator | Saturday 08 November 2025 13:40:39 +0000 (0:00:00.231) 0:00:09.561 ***** 2025-11-08 13:40:45.594988 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:45.594999 | orchestrator | 2025-11-08 13:40:45.595009 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:40:45.595020 | orchestrator | Saturday 08 November 2025 13:40:39 +0000 (0:00:00.225) 0:00:09.787 ***** 2025-11-08 13:40:45.595031 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:45.595041 | orchestrator | 2025-11-08 13:40:45.595052 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-11-08 13:40:45.595063 | orchestrator | Saturday 08 November 2025 13:40:39 +0000 (0:00:00.235) 0:00:10.022 ***** 2025-11-08 13:40:45.595073 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:45.595084 | orchestrator | 2025-11-08 13:40:45.595095 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-11-08 13:40:45.595105 | orchestrator | Saturday 08 November 2025 13:40:39 +0000 (0:00:00.149) 0:00:10.172 ***** 2025-11-08 13:40:45.595131 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'cd56445f-4803-5564-bbe6-d923870c576d'}}) 2025-11-08 13:40:45.595143 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c507e483-80d4-5110-a9ba-f918053b344b'}}) 2025-11-08 13:40:45.595176 | orchestrator | 2025-11-08 13:40:45.595190 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-11-08 13:40:45.595203 | orchestrator | Saturday 08 November 2025 13:40:39 +0000 (0:00:00.218) 0:00:10.390 ***** 2025-11-08 13:40:45.595216 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-cd56445f-4803-5564-bbe6-d923870c576d', 'data_vg': 'ceph-cd56445f-4803-5564-bbe6-d923870c576d'}) 2025-11-08 13:40:45.595235 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c507e483-80d4-5110-a9ba-f918053b344b', 'data_vg': 'ceph-c507e483-80d4-5110-a9ba-f918053b344b'}) 2025-11-08 13:40:45.595247 | orchestrator | 2025-11-08 13:40:45.595259 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-11-08 13:40:45.595271 | orchestrator | Saturday 08 November 2025 13:40:42 +0000 (0:00:02.081) 0:00:12.471 ***** 2025-11-08 13:40:45.595283 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cd56445f-4803-5564-bbe6-d923870c576d', 'data_vg': 'ceph-cd56445f-4803-5564-bbe6-d923870c576d'})  2025-11-08 13:40:45.595297 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c507e483-80d4-5110-a9ba-f918053b344b', 'data_vg': 'ceph-c507e483-80d4-5110-a9ba-f918053b344b'})  2025-11-08 13:40:45.595309 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:45.595321 | 
orchestrator | 2025-11-08 13:40:45.595333 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-11-08 13:40:45.595345 | orchestrator | Saturday 08 November 2025 13:40:42 +0000 (0:00:00.162) 0:00:12.634 ***** 2025-11-08 13:40:45.595357 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-cd56445f-4803-5564-bbe6-d923870c576d', 'data_vg': 'ceph-cd56445f-4803-5564-bbe6-d923870c576d'}) 2025-11-08 13:40:45.595369 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c507e483-80d4-5110-a9ba-f918053b344b', 'data_vg': 'ceph-c507e483-80d4-5110-a9ba-f918053b344b'}) 2025-11-08 13:40:45.595382 | orchestrator | 2025-11-08 13:40:45.595394 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-11-08 13:40:45.595406 | orchestrator | Saturday 08 November 2025 13:40:43 +0000 (0:00:01.499) 0:00:14.133 ***** 2025-11-08 13:40:45.595418 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cd56445f-4803-5564-bbe6-d923870c576d', 'data_vg': 'ceph-cd56445f-4803-5564-bbe6-d923870c576d'})  2025-11-08 13:40:45.595430 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c507e483-80d4-5110-a9ba-f918053b344b', 'data_vg': 'ceph-c507e483-80d4-5110-a9ba-f918053b344b'})  2025-11-08 13:40:45.595442 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:45.595454 | orchestrator | 2025-11-08 13:40:45.595466 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-11-08 13:40:45.595478 | orchestrator | Saturday 08 November 2025 13:40:43 +0000 (0:00:00.163) 0:00:14.296 ***** 2025-11-08 13:40:45.595507 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:45.595520 | orchestrator | 2025-11-08 13:40:45.595531 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-11-08 13:40:45.595541 | orchestrator | Saturday 08 November 2025 13:40:43 +0000 (0:00:00.104) 0:00:14.401 ***** 2025-11-08 13:40:45.595552 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cd56445f-4803-5564-bbe6-d923870c576d', 'data_vg': 'ceph-cd56445f-4803-5564-bbe6-d923870c576d'})  2025-11-08 13:40:45.595562 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c507e483-80d4-5110-a9ba-f918053b344b', 'data_vg': 'ceph-c507e483-80d4-5110-a9ba-f918053b344b'})  2025-11-08 13:40:45.595573 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:45.595584 | orchestrator | 2025-11-08 13:40:45.595594 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-11-08 13:40:45.595605 | orchestrator | Saturday 08 November 2025 13:40:44 +0000 (0:00:00.387) 0:00:14.789 ***** 2025-11-08 13:40:45.595615 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:45.595626 | orchestrator | 2025-11-08 13:40:45.595651 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-11-08 13:40:45.595662 | orchestrator | Saturday 08 November 2025 13:40:44 +0000 (0:00:00.133) 0:00:14.922 ***** 2025-11-08 13:40:45.595672 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cd56445f-4803-5564-bbe6-d923870c576d', 'data_vg': 'ceph-cd56445f-4803-5564-bbe6-d923870c576d'})  2025-11-08 13:40:45.595716 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c507e483-80d4-5110-a9ba-f918053b344b', 'data_vg': 'ceph-c507e483-80d4-5110-a9ba-f918053b344b'})  2025-11-08 
13:40:45.595727 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:45.595738 | orchestrator | 2025-11-08 13:40:45.595748 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-11-08 13:40:45.595759 | orchestrator | Saturday 08 November 2025 13:40:44 +0000 (0:00:00.144) 0:00:15.067 ***** 2025-11-08 13:40:45.595769 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:45.595780 | orchestrator | 2025-11-08 13:40:45.595790 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-11-08 13:40:45.595801 | orchestrator | Saturday 08 November 2025 13:40:44 +0000 (0:00:00.123) 0:00:15.191 ***** 2025-11-08 13:40:45.595811 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cd56445f-4803-5564-bbe6-d923870c576d', 'data_vg': 'ceph-cd56445f-4803-5564-bbe6-d923870c576d'})  2025-11-08 13:40:45.595822 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c507e483-80d4-5110-a9ba-f918053b344b', 'data_vg': 'ceph-c507e483-80d4-5110-a9ba-f918053b344b'})  2025-11-08 13:40:45.595832 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:45.595843 | orchestrator | 2025-11-08 13:40:45.595853 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-11-08 13:40:45.595864 | orchestrator | Saturday 08 November 2025 13:40:44 +0000 (0:00:00.130) 0:00:15.321 ***** 2025-11-08 13:40:45.595874 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:40:45.595885 | orchestrator | 2025-11-08 13:40:45.595896 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-11-08 13:40:45.595912 | orchestrator | Saturday 08 November 2025 13:40:45 +0000 (0:00:00.136) 0:00:15.458 ***** 2025-11-08 13:40:45.595922 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cd56445f-4803-5564-bbe6-d923870c576d', 'data_vg': 'ceph-cd56445f-4803-5564-bbe6-d923870c576d'})  2025-11-08 13:40:45.595933 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c507e483-80d4-5110-a9ba-f918053b344b', 'data_vg': 'ceph-c507e483-80d4-5110-a9ba-f918053b344b'})  2025-11-08 13:40:45.595944 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:45.595954 | orchestrator | 2025-11-08 13:40:45.595964 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-11-08 13:40:45.595975 | orchestrator | Saturday 08 November 2025 13:40:45 +0000 (0:00:00.165) 0:00:15.623 ***** 2025-11-08 13:40:45.595986 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cd56445f-4803-5564-bbe6-d923870c576d', 'data_vg': 'ceph-cd56445f-4803-5564-bbe6-d923870c576d'})  2025-11-08 13:40:45.595997 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c507e483-80d4-5110-a9ba-f918053b344b', 'data_vg': 'ceph-c507e483-80d4-5110-a9ba-f918053b344b'})  2025-11-08 13:40:45.596007 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:45.596018 | orchestrator | 2025-11-08 13:40:45.596028 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-11-08 13:40:45.596039 | orchestrator | Saturday 08 November 2025 13:40:45 +0000 (0:00:00.125) 0:00:15.749 ***** 2025-11-08 13:40:45.596050 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cd56445f-4803-5564-bbe6-d923870c576d', 'data_vg': 'ceph-cd56445f-4803-5564-bbe6-d923870c576d'})  2025-11-08 13:40:45.596060 | orchestrator | 
skipping: [testbed-node-3] => (item={'data': 'osd-block-c507e483-80d4-5110-a9ba-f918053b344b', 'data_vg': 'ceph-c507e483-80d4-5110-a9ba-f918053b344b'})  2025-11-08 13:40:45.596071 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:45.596088 | orchestrator | 2025-11-08 13:40:45.596099 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-11-08 13:40:45.596109 | orchestrator | Saturday 08 November 2025 13:40:45 +0000 (0:00:00.130) 0:00:15.880 ***** 2025-11-08 13:40:45.596120 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:45.596130 | orchestrator | 2025-11-08 13:40:45.596141 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-11-08 13:40:45.596158 | orchestrator | Saturday 08 November 2025 13:40:45 +0000 (0:00:00.111) 0:00:15.992 ***** 2025-11-08 13:40:52.060158 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:52.060276 | orchestrator | 2025-11-08 13:40:52.060294 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-11-08 13:40:52.060307 | orchestrator | Saturday 08 November 2025 13:40:45 +0000 (0:00:00.139) 0:00:16.131 ***** 2025-11-08 13:40:52.060319 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:52.060330 | orchestrator | 2025-11-08 13:40:52.060342 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-11-08 13:40:52.060352 | orchestrator | Saturday 08 November 2025 13:40:45 +0000 (0:00:00.160) 0:00:16.291 ***** 2025-11-08 13:40:52.060363 | orchestrator | ok: [testbed-node-3] => { 2025-11-08 13:40:52.060375 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-11-08 13:40:52.060386 | orchestrator | } 2025-11-08 13:40:52.060397 | orchestrator | 2025-11-08 13:40:52.060408 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-11-08 13:40:52.060419 | orchestrator | Saturday 08 November 2025 13:40:46 +0000 (0:00:00.265) 0:00:16.557 ***** 2025-11-08 13:40:52.060430 | orchestrator | ok: [testbed-node-3] => { 2025-11-08 13:40:52.060441 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-11-08 13:40:52.060451 | orchestrator | } 2025-11-08 13:40:52.060462 | orchestrator | 2025-11-08 13:40:52.060473 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-11-08 13:40:52.060485 | orchestrator | Saturday 08 November 2025 13:40:46 +0000 (0:00:00.134) 0:00:16.691 ***** 2025-11-08 13:40:52.060496 | orchestrator | ok: [testbed-node-3] => { 2025-11-08 13:40:52.060507 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-11-08 13:40:52.060518 | orchestrator | } 2025-11-08 13:40:52.060529 | orchestrator | 2025-11-08 13:40:52.060540 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-11-08 13:40:52.060551 | orchestrator | Saturday 08 November 2025 13:40:46 +0000 (0:00:00.116) 0:00:16.808 ***** 2025-11-08 13:40:52.060561 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:40:52.060572 | orchestrator | 2025-11-08 13:40:52.060583 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-11-08 13:40:52.060594 | orchestrator | Saturday 08 November 2025 13:40:47 +0000 (0:00:00.638) 0:00:17.447 ***** 2025-11-08 13:40:52.060604 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:40:52.060615 | orchestrator | 2025-11-08 13:40:52.060626 | orchestrator | TASK [Gather DB+WAL VGs 
with total and available size in bytes] **************** 2025-11-08 13:40:52.060637 | orchestrator | Saturday 08 November 2025 13:40:47 +0000 (0:00:00.507) 0:00:17.954 ***** 2025-11-08 13:40:52.060647 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:40:52.060658 | orchestrator | 2025-11-08 13:40:52.060671 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-11-08 13:40:52.060713 | orchestrator | Saturday 08 November 2025 13:40:48 +0000 (0:00:00.536) 0:00:18.490 ***** 2025-11-08 13:40:52.060726 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:40:52.060739 | orchestrator | 2025-11-08 13:40:52.060751 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-11-08 13:40:52.060764 | orchestrator | Saturday 08 November 2025 13:40:48 +0000 (0:00:00.152) 0:00:18.643 ***** 2025-11-08 13:40:52.060776 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:52.060788 | orchestrator | 2025-11-08 13:40:52.060801 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-11-08 13:40:52.060813 | orchestrator | Saturday 08 November 2025 13:40:48 +0000 (0:00:00.126) 0:00:18.769 ***** 2025-11-08 13:40:52.060848 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:52.060860 | orchestrator | 2025-11-08 13:40:52.060873 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-11-08 13:40:52.060885 | orchestrator | Saturday 08 November 2025 13:40:48 +0000 (0:00:00.113) 0:00:18.882 ***** 2025-11-08 13:40:52.060898 | orchestrator | ok: [testbed-node-3] => { 2025-11-08 13:40:52.060911 | orchestrator |  "vgs_report": { 2025-11-08 13:40:52.060924 | orchestrator |  "vg": [] 2025-11-08 13:40:52.060936 | orchestrator |  } 2025-11-08 13:40:52.060949 | orchestrator | } 2025-11-08 13:40:52.060961 | orchestrator | 2025-11-08 13:40:52.060973 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-11-08 13:40:52.060986 | orchestrator | Saturday 08 November 2025 13:40:48 +0000 (0:00:00.154) 0:00:19.037 ***** 2025-11-08 13:40:52.060999 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:52.061011 | orchestrator | 2025-11-08 13:40:52.061023 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-11-08 13:40:52.061054 | orchestrator | Saturday 08 November 2025 13:40:48 +0000 (0:00:00.151) 0:00:19.188 ***** 2025-11-08 13:40:52.061067 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:52.061080 | orchestrator | 2025-11-08 13:40:52.061093 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-11-08 13:40:52.061104 | orchestrator | Saturday 08 November 2025 13:40:48 +0000 (0:00:00.147) 0:00:19.335 ***** 2025-11-08 13:40:52.061114 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:52.061125 | orchestrator | 2025-11-08 13:40:52.061135 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-11-08 13:40:52.061243 | orchestrator | Saturday 08 November 2025 13:40:49 +0000 (0:00:00.359) 0:00:19.695 ***** 2025-11-08 13:40:52.061255 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:52.061266 | orchestrator | 2025-11-08 13:40:52.061277 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-11-08 13:40:52.061288 | orchestrator | Saturday 08 November 2025 13:40:49 +0000 
(0:00:00.151) 0:00:19.846 ***** 2025-11-08 13:40:52.061299 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:52.061310 | orchestrator | 2025-11-08 13:40:52.061321 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-11-08 13:40:52.061331 | orchestrator | Saturday 08 November 2025 13:40:49 +0000 (0:00:00.156) 0:00:20.003 ***** 2025-11-08 13:40:52.061350 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:52.061368 | orchestrator | 2025-11-08 13:40:52.061389 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-11-08 13:40:52.061409 | orchestrator | Saturday 08 November 2025 13:40:49 +0000 (0:00:00.144) 0:00:20.147 ***** 2025-11-08 13:40:52.061428 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:52.061449 | orchestrator | 2025-11-08 13:40:52.061469 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-11-08 13:40:52.061487 | orchestrator | Saturday 08 November 2025 13:40:49 +0000 (0:00:00.150) 0:00:20.298 ***** 2025-11-08 13:40:52.061519 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:52.061530 | orchestrator | 2025-11-08 13:40:52.061541 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-11-08 13:40:52.061552 | orchestrator | Saturday 08 November 2025 13:40:50 +0000 (0:00:00.144) 0:00:20.443 ***** 2025-11-08 13:40:52.061563 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:52.061573 | orchestrator | 2025-11-08 13:40:52.061584 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-11-08 13:40:52.061594 | orchestrator | Saturday 08 November 2025 13:40:50 +0000 (0:00:00.136) 0:00:20.579 ***** 2025-11-08 13:40:52.061613 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:52.061632 | orchestrator | 2025-11-08 13:40:52.061650 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-11-08 13:40:52.061665 | orchestrator | Saturday 08 November 2025 13:40:50 +0000 (0:00:00.136) 0:00:20.716 ***** 2025-11-08 13:40:52.061677 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:52.061713 | orchestrator | 2025-11-08 13:40:52.061734 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-11-08 13:40:52.061745 | orchestrator | Saturday 08 November 2025 13:40:50 +0000 (0:00:00.148) 0:00:20.865 ***** 2025-11-08 13:40:52.061756 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:52.061767 | orchestrator | 2025-11-08 13:40:52.061778 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-11-08 13:40:52.061788 | orchestrator | Saturday 08 November 2025 13:40:50 +0000 (0:00:00.115) 0:00:20.980 ***** 2025-11-08 13:40:52.061802 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:52.061819 | orchestrator | 2025-11-08 13:40:52.061835 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-11-08 13:40:52.061855 | orchestrator | Saturday 08 November 2025 13:40:50 +0000 (0:00:00.142) 0:00:21.123 ***** 2025-11-08 13:40:52.061873 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:52.061891 | orchestrator | 2025-11-08 13:40:52.061909 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-11-08 13:40:52.061929 | orchestrator | Saturday 08 November 2025 
13:40:50 +0000 (0:00:00.138) 0:00:21.262 ***** 2025-11-08 13:40:52.061949 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cd56445f-4803-5564-bbe6-d923870c576d', 'data_vg': 'ceph-cd56445f-4803-5564-bbe6-d923870c576d'})  2025-11-08 13:40:52.061970 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c507e483-80d4-5110-a9ba-f918053b344b', 'data_vg': 'ceph-c507e483-80d4-5110-a9ba-f918053b344b'})  2025-11-08 13:40:52.061989 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:52.062007 | orchestrator | 2025-11-08 13:40:52.062116 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-11-08 13:40:52.062136 | orchestrator | Saturday 08 November 2025 13:40:51 +0000 (0:00:00.355) 0:00:21.617 ***** 2025-11-08 13:40:52.062195 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cd56445f-4803-5564-bbe6-d923870c576d', 'data_vg': 'ceph-cd56445f-4803-5564-bbe6-d923870c576d'})  2025-11-08 13:40:52.062208 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c507e483-80d4-5110-a9ba-f918053b344b', 'data_vg': 'ceph-c507e483-80d4-5110-a9ba-f918053b344b'})  2025-11-08 13:40:52.062227 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:52.062238 | orchestrator | 2025-11-08 13:40:52.062249 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-11-08 13:40:52.062260 | orchestrator | Saturday 08 November 2025 13:40:51 +0000 (0:00:00.172) 0:00:21.790 ***** 2025-11-08 13:40:52.062271 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cd56445f-4803-5564-bbe6-d923870c576d', 'data_vg': 'ceph-cd56445f-4803-5564-bbe6-d923870c576d'})  2025-11-08 13:40:52.062282 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c507e483-80d4-5110-a9ba-f918053b344b', 'data_vg': 'ceph-c507e483-80d4-5110-a9ba-f918053b344b'})  2025-11-08 13:40:52.062293 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:52.062304 | orchestrator | 2025-11-08 13:40:52.062314 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-11-08 13:40:52.062325 | orchestrator | Saturday 08 November 2025 13:40:51 +0000 (0:00:00.170) 0:00:21.961 ***** 2025-11-08 13:40:52.062336 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cd56445f-4803-5564-bbe6-d923870c576d', 'data_vg': 'ceph-cd56445f-4803-5564-bbe6-d923870c576d'})  2025-11-08 13:40:52.062347 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c507e483-80d4-5110-a9ba-f918053b344b', 'data_vg': 'ceph-c507e483-80d4-5110-a9ba-f918053b344b'})  2025-11-08 13:40:52.062358 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:52.062368 | orchestrator | 2025-11-08 13:40:52.062379 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-11-08 13:40:52.062390 | orchestrator | Saturday 08 November 2025 13:40:51 +0000 (0:00:00.163) 0:00:22.125 ***** 2025-11-08 13:40:52.062401 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cd56445f-4803-5564-bbe6-d923870c576d', 'data_vg': 'ceph-cd56445f-4803-5564-bbe6-d923870c576d'})  2025-11-08 13:40:52.062420 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c507e483-80d4-5110-a9ba-f918053b344b', 'data_vg': 'ceph-c507e483-80d4-5110-a9ba-f918053b344b'})  2025-11-08 13:40:52.062431 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:52.062442 | 
orchestrator | 2025-11-08 13:40:52.062453 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-11-08 13:40:52.062463 | orchestrator | Saturday 08 November 2025 13:40:51 +0000 (0:00:00.167) 0:00:22.292 ***** 2025-11-08 13:40:52.062485 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cd56445f-4803-5564-bbe6-d923870c576d', 'data_vg': 'ceph-cd56445f-4803-5564-bbe6-d923870c576d'})  2025-11-08 13:40:57.086121 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c507e483-80d4-5110-a9ba-f918053b344b', 'data_vg': 'ceph-c507e483-80d4-5110-a9ba-f918053b344b'})  2025-11-08 13:40:57.086225 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:57.086241 | orchestrator | 2025-11-08 13:40:57.086253 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-11-08 13:40:57.086266 | orchestrator | Saturday 08 November 2025 13:40:52 +0000 (0:00:00.164) 0:00:22.457 ***** 2025-11-08 13:40:57.086278 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cd56445f-4803-5564-bbe6-d923870c576d', 'data_vg': 'ceph-cd56445f-4803-5564-bbe6-d923870c576d'})  2025-11-08 13:40:57.086289 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c507e483-80d4-5110-a9ba-f918053b344b', 'data_vg': 'ceph-c507e483-80d4-5110-a9ba-f918053b344b'})  2025-11-08 13:40:57.086300 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:57.086311 | orchestrator | 2025-11-08 13:40:57.086322 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-11-08 13:40:57.086333 | orchestrator | Saturday 08 November 2025 13:40:52 +0000 (0:00:00.169) 0:00:22.626 ***** 2025-11-08 13:40:57.086344 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cd56445f-4803-5564-bbe6-d923870c576d', 'data_vg': 'ceph-cd56445f-4803-5564-bbe6-d923870c576d'})  2025-11-08 13:40:57.086355 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c507e483-80d4-5110-a9ba-f918053b344b', 'data_vg': 'ceph-c507e483-80d4-5110-a9ba-f918053b344b'})  2025-11-08 13:40:57.086366 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:57.086377 | orchestrator | 2025-11-08 13:40:57.086388 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-11-08 13:40:57.086399 | orchestrator | Saturday 08 November 2025 13:40:52 +0000 (0:00:00.145) 0:00:22.772 ***** 2025-11-08 13:40:57.086410 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:40:57.086421 | orchestrator | 2025-11-08 13:40:57.086432 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-11-08 13:40:57.086443 | orchestrator | Saturday 08 November 2025 13:40:52 +0000 (0:00:00.497) 0:00:23.270 ***** 2025-11-08 13:40:57.086454 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:40:57.086464 | orchestrator | 2025-11-08 13:40:57.086475 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-11-08 13:40:57.086486 | orchestrator | Saturday 08 November 2025 13:40:53 +0000 (0:00:00.500) 0:00:23.770 ***** 2025-11-08 13:40:57.086497 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:40:57.086508 | orchestrator | 2025-11-08 13:40:57.086518 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-11-08 13:40:57.086529 | orchestrator | Saturday 08 November 2025 13:40:53 +0000 (0:00:00.152) 
0:00:23.923 ***** 2025-11-08 13:40:57.086540 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-c507e483-80d4-5110-a9ba-f918053b344b', 'vg_name': 'ceph-c507e483-80d4-5110-a9ba-f918053b344b'}) 2025-11-08 13:40:57.086553 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-cd56445f-4803-5564-bbe6-d923870c576d', 'vg_name': 'ceph-cd56445f-4803-5564-bbe6-d923870c576d'}) 2025-11-08 13:40:57.086564 | orchestrator | 2025-11-08 13:40:57.086575 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-11-08 13:40:57.086611 | orchestrator | Saturday 08 November 2025 13:40:53 +0000 (0:00:00.202) 0:00:24.126 ***** 2025-11-08 13:40:57.086625 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cd56445f-4803-5564-bbe6-d923870c576d', 'data_vg': 'ceph-cd56445f-4803-5564-bbe6-d923870c576d'})  2025-11-08 13:40:57.086638 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c507e483-80d4-5110-a9ba-f918053b344b', 'data_vg': 'ceph-c507e483-80d4-5110-a9ba-f918053b344b'})  2025-11-08 13:40:57.086651 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:57.086664 | orchestrator | 2025-11-08 13:40:57.086676 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-11-08 13:40:57.086721 | orchestrator | Saturday 08 November 2025 13:40:54 +0000 (0:00:00.466) 0:00:24.592 ***** 2025-11-08 13:40:57.086734 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cd56445f-4803-5564-bbe6-d923870c576d', 'data_vg': 'ceph-cd56445f-4803-5564-bbe6-d923870c576d'})  2025-11-08 13:40:57.086747 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c507e483-80d4-5110-a9ba-f918053b344b', 'data_vg': 'ceph-c507e483-80d4-5110-a9ba-f918053b344b'})  2025-11-08 13:40:57.086761 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:57.086773 | orchestrator | 2025-11-08 13:40:57.086786 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-11-08 13:40:57.086798 | orchestrator | Saturday 08 November 2025 13:40:54 +0000 (0:00:00.181) 0:00:24.773 ***** 2025-11-08 13:40:57.086812 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cd56445f-4803-5564-bbe6-d923870c576d', 'data_vg': 'ceph-cd56445f-4803-5564-bbe6-d923870c576d'})  2025-11-08 13:40:57.086824 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c507e483-80d4-5110-a9ba-f918053b344b', 'data_vg': 'ceph-c507e483-80d4-5110-a9ba-f918053b344b'})  2025-11-08 13:40:57.086837 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:40:57.086849 | orchestrator | 2025-11-08 13:40:57.086862 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-11-08 13:40:57.086874 | orchestrator | Saturday 08 November 2025 13:40:54 +0000 (0:00:00.163) 0:00:24.937 ***** 2025-11-08 13:40:57.086903 | orchestrator | ok: [testbed-node-3] => { 2025-11-08 13:40:57.086915 | orchestrator |  "lvm_report": { 2025-11-08 13:40:57.086926 | orchestrator |  "lv": [ 2025-11-08 13:40:57.086937 | orchestrator |  { 2025-11-08 13:40:57.086948 | orchestrator |  "lv_name": "osd-block-c507e483-80d4-5110-a9ba-f918053b344b", 2025-11-08 13:40:57.086960 | orchestrator |  "vg_name": "ceph-c507e483-80d4-5110-a9ba-f918053b344b" 2025-11-08 13:40:57.086971 | orchestrator |  }, 2025-11-08 13:40:57.086981 | orchestrator |  { 2025-11-08 13:40:57.086992 | orchestrator |  "lv_name": 
"osd-block-cd56445f-4803-5564-bbe6-d923870c576d", 2025-11-08 13:40:57.087003 | orchestrator |  "vg_name": "ceph-cd56445f-4803-5564-bbe6-d923870c576d" 2025-11-08 13:40:57.087014 | orchestrator |  } 2025-11-08 13:40:57.087025 | orchestrator |  ], 2025-11-08 13:40:57.087036 | orchestrator |  "pv": [ 2025-11-08 13:40:57.087046 | orchestrator |  { 2025-11-08 13:40:57.087057 | orchestrator |  "pv_name": "/dev/sdb", 2025-11-08 13:40:57.087068 | orchestrator |  "vg_name": "ceph-cd56445f-4803-5564-bbe6-d923870c576d" 2025-11-08 13:40:57.087079 | orchestrator |  }, 2025-11-08 13:40:57.087089 | orchestrator |  { 2025-11-08 13:40:57.087100 | orchestrator |  "pv_name": "/dev/sdc", 2025-11-08 13:40:57.087111 | orchestrator |  "vg_name": "ceph-c507e483-80d4-5110-a9ba-f918053b344b" 2025-11-08 13:40:57.087122 | orchestrator |  } 2025-11-08 13:40:57.087150 | orchestrator |  ] 2025-11-08 13:40:57.087161 | orchestrator |  } 2025-11-08 13:40:57.087172 | orchestrator | } 2025-11-08 13:40:57.087184 | orchestrator | 2025-11-08 13:40:57.087195 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-11-08 13:40:57.087213 | orchestrator | 2025-11-08 13:40:57.087224 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-11-08 13:40:57.087235 | orchestrator | Saturday 08 November 2025 13:40:54 +0000 (0:00:00.328) 0:00:25.265 ***** 2025-11-08 13:40:57.087246 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-11-08 13:40:57.087257 | orchestrator | 2025-11-08 13:40:57.087268 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-11-08 13:40:57.087279 | orchestrator | Saturday 08 November 2025 13:40:55 +0000 (0:00:00.240) 0:00:25.506 ***** 2025-11-08 13:40:57.087290 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:40:57.087300 | orchestrator | 2025-11-08 13:40:57.087311 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:40:57.087322 | orchestrator | Saturday 08 November 2025 13:40:55 +0000 (0:00:00.203) 0:00:25.710 ***** 2025-11-08 13:40:57.087333 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-11-08 13:40:57.087344 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-11-08 13:40:57.087354 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-11-08 13:40:57.087419 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-11-08 13:40:57.087430 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-11-08 13:40:57.087446 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-11-08 13:40:57.087458 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-11-08 13:40:57.087468 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-11-08 13:40:57.087479 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-11-08 13:40:57.087490 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-11-08 13:40:57.087501 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-11-08 
13:40:57.087512 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-11-08 13:40:57.087523 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-11-08 13:40:57.087534 | orchestrator | 2025-11-08 13:40:57.087544 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:40:57.087555 | orchestrator | Saturday 08 November 2025 13:40:55 +0000 (0:00:00.379) 0:00:26.090 ***** 2025-11-08 13:40:57.087566 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:40:57.087577 | orchestrator | 2025-11-08 13:40:57.087588 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:40:57.087599 | orchestrator | Saturday 08 November 2025 13:40:55 +0000 (0:00:00.185) 0:00:26.275 ***** 2025-11-08 13:40:57.087610 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:40:57.087620 | orchestrator | 2025-11-08 13:40:57.087631 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:40:57.087642 | orchestrator | Saturday 08 November 2025 13:40:56 +0000 (0:00:00.184) 0:00:26.460 ***** 2025-11-08 13:40:57.087653 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:40:57.087664 | orchestrator | 2025-11-08 13:40:57.087674 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:40:57.087704 | orchestrator | Saturday 08 November 2025 13:40:56 +0000 (0:00:00.458) 0:00:26.918 ***** 2025-11-08 13:40:57.087715 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:40:57.087726 | orchestrator | 2025-11-08 13:40:57.087737 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:40:57.087748 | orchestrator | Saturday 08 November 2025 13:40:56 +0000 (0:00:00.181) 0:00:27.100 ***** 2025-11-08 13:40:57.087758 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:40:57.087777 | orchestrator | 2025-11-08 13:40:57.087788 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:40:57.087799 | orchestrator | Saturday 08 November 2025 13:40:56 +0000 (0:00:00.204) 0:00:27.305 ***** 2025-11-08 13:40:57.087809 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:40:57.087820 | orchestrator | 2025-11-08 13:40:57.087840 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:41:07.002866 | orchestrator | Saturday 08 November 2025 13:40:57 +0000 (0:00:00.176) 0:00:27.481 ***** 2025-11-08 13:41:07.002947 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:07.002954 | orchestrator | 2025-11-08 13:41:07.002960 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:41:07.002964 | orchestrator | Saturday 08 November 2025 13:40:57 +0000 (0:00:00.173) 0:00:27.655 ***** 2025-11-08 13:41:07.002968 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:07.002972 | orchestrator | 2025-11-08 13:41:07.002976 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:41:07.002981 | orchestrator | Saturday 08 November 2025 13:40:57 +0000 (0:00:00.186) 0:00:27.841 ***** 2025-11-08 13:41:07.002985 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d1fc42fd-5332-49a1-9701-fd67e0fd5d8d) 2025-11-08 13:41:07.002990 | 
orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d1fc42fd-5332-49a1-9701-fd67e0fd5d8d) 2025-11-08 13:41:07.002994 | orchestrator | 2025-11-08 13:41:07.002998 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:41:07.003001 | orchestrator | Saturday 08 November 2025 13:40:57 +0000 (0:00:00.379) 0:00:28.220 ***** 2025-11-08 13:41:07.003005 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_92c2e246-dc93-49f1-98da-a6574bccf4cb) 2025-11-08 13:41:07.003009 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_92c2e246-dc93-49f1-98da-a6574bccf4cb) 2025-11-08 13:41:07.003013 | orchestrator | 2025-11-08 13:41:07.003017 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:41:07.003021 | orchestrator | Saturday 08 November 2025 13:40:58 +0000 (0:00:00.334) 0:00:28.555 ***** 2025-11-08 13:41:07.003024 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_dc29408d-4f3e-478d-82da-c226aaca029c) 2025-11-08 13:41:07.003028 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_dc29408d-4f3e-478d-82da-c226aaca029c) 2025-11-08 13:41:07.003032 | orchestrator | 2025-11-08 13:41:07.003036 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:41:07.003040 | orchestrator | Saturday 08 November 2025 13:40:58 +0000 (0:00:00.346) 0:00:28.902 ***** 2025-11-08 13:41:07.003044 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a45a4cf7-d855-4857-b9ae-b573b3c7176d) 2025-11-08 13:41:07.003048 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a45a4cf7-d855-4857-b9ae-b573b3c7176d) 2025-11-08 13:41:07.003051 | orchestrator | 2025-11-08 13:41:07.003055 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:41:07.003059 | orchestrator | Saturday 08 November 2025 13:40:58 +0000 (0:00:00.486) 0:00:29.388 ***** 2025-11-08 13:41:07.003063 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-11-08 13:41:07.003067 | orchestrator | 2025-11-08 13:41:07.003070 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:41:07.003086 | orchestrator | Saturday 08 November 2025 13:40:59 +0000 (0:00:00.444) 0:00:29.833 ***** 2025-11-08 13:41:07.003090 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-11-08 13:41:07.003095 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-11-08 13:41:07.003098 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-11-08 13:41:07.003102 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-11-08 13:41:07.003119 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-11-08 13:41:07.003123 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-11-08 13:41:07.003127 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-11-08 13:41:07.003131 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-11-08 13:41:07.003134 | orchestrator | 
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-11-08 13:41:07.003138 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-11-08 13:41:07.003142 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-11-08 13:41:07.003146 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-11-08 13:41:07.003149 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-11-08 13:41:07.003153 | orchestrator | 2025-11-08 13:41:07.003157 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:41:07.003161 | orchestrator | Saturday 08 November 2025 13:41:00 +0000 (0:00:00.678) 0:00:30.512 ***** 2025-11-08 13:41:07.003164 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:07.003169 | orchestrator | 2025-11-08 13:41:07.003172 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:41:07.003183 | orchestrator | Saturday 08 November 2025 13:41:00 +0000 (0:00:00.166) 0:00:30.678 ***** 2025-11-08 13:41:07.003187 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:07.003191 | orchestrator | 2025-11-08 13:41:07.003195 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:41:07.003198 | orchestrator | Saturday 08 November 2025 13:41:00 +0000 (0:00:00.186) 0:00:30.865 ***** 2025-11-08 13:41:07.003202 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:07.003206 | orchestrator | 2025-11-08 13:41:07.003219 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:41:07.003223 | orchestrator | Saturday 08 November 2025 13:41:00 +0000 (0:00:00.193) 0:00:31.059 ***** 2025-11-08 13:41:07.003227 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:07.003230 | orchestrator | 2025-11-08 13:41:07.003234 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:41:07.003238 | orchestrator | Saturday 08 November 2025 13:41:00 +0000 (0:00:00.196) 0:00:31.255 ***** 2025-11-08 13:41:07.003241 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:07.003245 | orchestrator | 2025-11-08 13:41:07.003249 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:41:07.003253 | orchestrator | Saturday 08 November 2025 13:41:01 +0000 (0:00:00.186) 0:00:31.442 ***** 2025-11-08 13:41:07.003256 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:07.003260 | orchestrator | 2025-11-08 13:41:07.003264 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:41:07.003267 | orchestrator | Saturday 08 November 2025 13:41:01 +0000 (0:00:00.195) 0:00:31.637 ***** 2025-11-08 13:41:07.003271 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:07.003275 | orchestrator | 2025-11-08 13:41:07.003278 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:41:07.003282 | orchestrator | Saturday 08 November 2025 13:41:01 +0000 (0:00:00.204) 0:00:31.842 ***** 2025-11-08 13:41:07.003286 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:07.003290 | orchestrator | 2025-11-08 13:41:07.003293 | orchestrator | TASK [Add 
known partitions to the list of available block devices] ************* 2025-11-08 13:41:07.003297 | orchestrator | Saturday 08 November 2025 13:41:01 +0000 (0:00:00.191) 0:00:32.033 ***** 2025-11-08 13:41:07.003301 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-11-08 13:41:07.003305 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-11-08 13:41:07.003310 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-11-08 13:41:07.003316 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-11-08 13:41:07.003320 | orchestrator | 2025-11-08 13:41:07.003324 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:41:07.003328 | orchestrator | Saturday 08 November 2025 13:41:02 +0000 (0:00:00.776) 0:00:32.809 ***** 2025-11-08 13:41:07.003331 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:07.003335 | orchestrator | 2025-11-08 13:41:07.003339 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:41:07.003342 | orchestrator | Saturday 08 November 2025 13:41:02 +0000 (0:00:00.183) 0:00:32.993 ***** 2025-11-08 13:41:07.003346 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:07.003350 | orchestrator | 2025-11-08 13:41:07.003353 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:41:07.003357 | orchestrator | Saturday 08 November 2025 13:41:03 +0000 (0:00:00.443) 0:00:33.436 ***** 2025-11-08 13:41:07.003361 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:07.003365 | orchestrator | 2025-11-08 13:41:07.003368 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:41:07.003372 | orchestrator | Saturday 08 November 2025 13:41:03 +0000 (0:00:00.184) 0:00:33.621 ***** 2025-11-08 13:41:07.003376 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:07.003380 | orchestrator | 2025-11-08 13:41:07.003383 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-11-08 13:41:07.003387 | orchestrator | Saturday 08 November 2025 13:41:03 +0000 (0:00:00.184) 0:00:33.806 ***** 2025-11-08 13:41:07.003391 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:07.003394 | orchestrator | 2025-11-08 13:41:07.003398 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-11-08 13:41:07.003402 | orchestrator | Saturday 08 November 2025 13:41:03 +0000 (0:00:00.135) 0:00:33.941 ***** 2025-11-08 13:41:07.003406 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f393addc-5b9a-54bf-a4a6-7d44f9449202'}}) 2025-11-08 13:41:07.003410 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '380ddcdc-ed2e-5f5e-8a3f-001787d903df'}}) 2025-11-08 13:41:07.003415 | orchestrator | 2025-11-08 13:41:07.003419 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-11-08 13:41:07.003423 | orchestrator | Saturday 08 November 2025 13:41:03 +0000 (0:00:00.163) 0:00:34.104 ***** 2025-11-08 13:41:07.003429 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f393addc-5b9a-54bf-a4a6-7d44f9449202', 'data_vg': 'ceph-f393addc-5b9a-54bf-a4a6-7d44f9449202'}) 2025-11-08 13:41:07.003434 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-380ddcdc-ed2e-5f5e-8a3f-001787d903df', 'data_vg': 
'ceph-380ddcdc-ed2e-5f5e-8a3f-001787d903df'}) 2025-11-08 13:41:07.003438 | orchestrator | 2025-11-08 13:41:07.003442 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-11-08 13:41:07.003447 | orchestrator | Saturday 08 November 2025 13:41:05 +0000 (0:00:01.816) 0:00:35.921 ***** 2025-11-08 13:41:07.003451 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f393addc-5b9a-54bf-a4a6-7d44f9449202', 'data_vg': 'ceph-f393addc-5b9a-54bf-a4a6-7d44f9449202'})  2025-11-08 13:41:07.003457 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-380ddcdc-ed2e-5f5e-8a3f-001787d903df', 'data_vg': 'ceph-380ddcdc-ed2e-5f5e-8a3f-001787d903df'})  2025-11-08 13:41:07.003461 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:07.003466 | orchestrator | 2025-11-08 13:41:07.003470 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-11-08 13:41:07.003474 | orchestrator | Saturday 08 November 2025 13:41:05 +0000 (0:00:00.180) 0:00:36.102 ***** 2025-11-08 13:41:07.003479 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f393addc-5b9a-54bf-a4a6-7d44f9449202', 'data_vg': 'ceph-f393addc-5b9a-54bf-a4a6-7d44f9449202'}) 2025-11-08 13:41:07.003486 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-380ddcdc-ed2e-5f5e-8a3f-001787d903df', 'data_vg': 'ceph-380ddcdc-ed2e-5f5e-8a3f-001787d903df'}) 2025-11-08 13:41:12.515574 | orchestrator | 2025-11-08 13:41:12.515712 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-11-08 13:41:12.515729 | orchestrator | Saturday 08 November 2025 13:41:06 +0000 (0:00:01.293) 0:00:37.396 ***** 2025-11-08 13:41:12.515763 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f393addc-5b9a-54bf-a4a6-7d44f9449202', 'data_vg': 'ceph-f393addc-5b9a-54bf-a4a6-7d44f9449202'})  2025-11-08 13:41:12.515777 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-380ddcdc-ed2e-5f5e-8a3f-001787d903df', 'data_vg': 'ceph-380ddcdc-ed2e-5f5e-8a3f-001787d903df'})  2025-11-08 13:41:12.515789 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:12.515801 | orchestrator | 2025-11-08 13:41:12.515813 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-11-08 13:41:12.515824 | orchestrator | Saturday 08 November 2025 13:41:07 +0000 (0:00:00.153) 0:00:37.549 ***** 2025-11-08 13:41:12.515835 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:12.515847 | orchestrator | 2025-11-08 13:41:12.515858 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-11-08 13:41:12.515868 | orchestrator | Saturday 08 November 2025 13:41:07 +0000 (0:00:00.136) 0:00:37.686 ***** 2025-11-08 13:41:12.515880 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f393addc-5b9a-54bf-a4a6-7d44f9449202', 'data_vg': 'ceph-f393addc-5b9a-54bf-a4a6-7d44f9449202'})  2025-11-08 13:41:12.515891 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-380ddcdc-ed2e-5f5e-8a3f-001787d903df', 'data_vg': 'ceph-380ddcdc-ed2e-5f5e-8a3f-001787d903df'})  2025-11-08 13:41:12.515902 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:12.515912 | orchestrator | 2025-11-08 13:41:12.515923 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-11-08 13:41:12.515934 | orchestrator | Saturday 
08 November 2025 13:41:07 +0000 (0:00:00.163) 0:00:37.849 ***** 2025-11-08 13:41:12.515945 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:12.515956 | orchestrator | 2025-11-08 13:41:12.515966 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-11-08 13:41:12.515977 | orchestrator | Saturday 08 November 2025 13:41:07 +0000 (0:00:00.143) 0:00:37.993 ***** 2025-11-08 13:41:12.515988 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f393addc-5b9a-54bf-a4a6-7d44f9449202', 'data_vg': 'ceph-f393addc-5b9a-54bf-a4a6-7d44f9449202'})  2025-11-08 13:41:12.515999 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-380ddcdc-ed2e-5f5e-8a3f-001787d903df', 'data_vg': 'ceph-380ddcdc-ed2e-5f5e-8a3f-001787d903df'})  2025-11-08 13:41:12.516015 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:12.516025 | orchestrator | 2025-11-08 13:41:12.516036 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-11-08 13:41:12.516047 | orchestrator | Saturday 08 November 2025 13:41:07 +0000 (0:00:00.345) 0:00:38.338 ***** 2025-11-08 13:41:12.516058 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:12.516069 | orchestrator | 2025-11-08 13:41:12.516079 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-11-08 13:41:12.516090 | orchestrator | Saturday 08 November 2025 13:41:08 +0000 (0:00:00.131) 0:00:38.469 ***** 2025-11-08 13:41:12.516103 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f393addc-5b9a-54bf-a4a6-7d44f9449202', 'data_vg': 'ceph-f393addc-5b9a-54bf-a4a6-7d44f9449202'})  2025-11-08 13:41:12.516116 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-380ddcdc-ed2e-5f5e-8a3f-001787d903df', 'data_vg': 'ceph-380ddcdc-ed2e-5f5e-8a3f-001787d903df'})  2025-11-08 13:41:12.516128 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:12.516140 | orchestrator | 2025-11-08 13:41:12.516152 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-11-08 13:41:12.516164 | orchestrator | Saturday 08 November 2025 13:41:08 +0000 (0:00:00.162) 0:00:38.631 ***** 2025-11-08 13:41:12.516195 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:41:12.516209 | orchestrator | 2025-11-08 13:41:12.516222 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-11-08 13:41:12.516234 | orchestrator | Saturday 08 November 2025 13:41:08 +0000 (0:00:00.141) 0:00:38.773 ***** 2025-11-08 13:41:12.516246 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f393addc-5b9a-54bf-a4a6-7d44f9449202', 'data_vg': 'ceph-f393addc-5b9a-54bf-a4a6-7d44f9449202'})  2025-11-08 13:41:12.516258 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-380ddcdc-ed2e-5f5e-8a3f-001787d903df', 'data_vg': 'ceph-380ddcdc-ed2e-5f5e-8a3f-001787d903df'})  2025-11-08 13:41:12.516270 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:12.516282 | orchestrator | 2025-11-08 13:41:12.516294 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-11-08 13:41:12.516307 | orchestrator | Saturday 08 November 2025 13:41:08 +0000 (0:00:00.141) 0:00:38.915 ***** 2025-11-08 13:41:12.516319 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f393addc-5b9a-54bf-a4a6-7d44f9449202', 'data_vg': 
'ceph-f393addc-5b9a-54bf-a4a6-7d44f9449202'})  2025-11-08 13:41:12.516331 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-380ddcdc-ed2e-5f5e-8a3f-001787d903df', 'data_vg': 'ceph-380ddcdc-ed2e-5f5e-8a3f-001787d903df'})  2025-11-08 13:41:12.516343 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:12.516355 | orchestrator | 2025-11-08 13:41:12.516366 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-11-08 13:41:12.516396 | orchestrator | Saturday 08 November 2025 13:41:08 +0000 (0:00:00.153) 0:00:39.069 ***** 2025-11-08 13:41:12.516409 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f393addc-5b9a-54bf-a4a6-7d44f9449202', 'data_vg': 'ceph-f393addc-5b9a-54bf-a4a6-7d44f9449202'})  2025-11-08 13:41:12.516421 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-380ddcdc-ed2e-5f5e-8a3f-001787d903df', 'data_vg': 'ceph-380ddcdc-ed2e-5f5e-8a3f-001787d903df'})  2025-11-08 13:41:12.516434 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:12.516446 | orchestrator | 2025-11-08 13:41:12.516458 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-11-08 13:41:12.516469 | orchestrator | Saturday 08 November 2025 13:41:08 +0000 (0:00:00.152) 0:00:39.221 ***** 2025-11-08 13:41:12.516480 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:12.516491 | orchestrator | 2025-11-08 13:41:12.516502 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-11-08 13:41:12.516512 | orchestrator | Saturday 08 November 2025 13:41:08 +0000 (0:00:00.133) 0:00:39.355 ***** 2025-11-08 13:41:12.516523 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:12.516533 | orchestrator | 2025-11-08 13:41:12.516544 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-11-08 13:41:12.516555 | orchestrator | Saturday 08 November 2025 13:41:09 +0000 (0:00:00.136) 0:00:39.491 ***** 2025-11-08 13:41:12.516565 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:12.516576 | orchestrator | 2025-11-08 13:41:12.516586 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-11-08 13:41:12.516597 | orchestrator | Saturday 08 November 2025 13:41:09 +0000 (0:00:00.143) 0:00:39.634 ***** 2025-11-08 13:41:12.516608 | orchestrator | ok: [testbed-node-4] => { 2025-11-08 13:41:12.516618 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-11-08 13:41:12.516629 | orchestrator | } 2025-11-08 13:41:12.516640 | orchestrator | 2025-11-08 13:41:12.516651 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-11-08 13:41:12.516662 | orchestrator | Saturday 08 November 2025 13:41:09 +0000 (0:00:00.156) 0:00:39.791 ***** 2025-11-08 13:41:12.516672 | orchestrator | ok: [testbed-node-4] => { 2025-11-08 13:41:12.516726 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-11-08 13:41:12.516737 | orchestrator | } 2025-11-08 13:41:12.516748 | orchestrator | 2025-11-08 13:41:12.516767 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-11-08 13:41:12.516778 | orchestrator | Saturday 08 November 2025 13:41:09 +0000 (0:00:00.146) 0:00:39.937 ***** 2025-11-08 13:41:12.516789 | orchestrator | ok: [testbed-node-4] => { 2025-11-08 13:41:12.516800 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-11-08 
13:41:12.516811 | orchestrator | } 2025-11-08 13:41:12.516821 | orchestrator | 2025-11-08 13:41:12.516832 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-11-08 13:41:12.516843 | orchestrator | Saturday 08 November 2025 13:41:09 +0000 (0:00:00.346) 0:00:40.284 ***** 2025-11-08 13:41:12.516860 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:41:12.516871 | orchestrator | 2025-11-08 13:41:12.516882 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-11-08 13:41:12.516892 | orchestrator | Saturday 08 November 2025 13:41:10 +0000 (0:00:00.487) 0:00:40.772 ***** 2025-11-08 13:41:12.516903 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:41:12.516914 | orchestrator | 2025-11-08 13:41:12.516925 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-11-08 13:41:12.516935 | orchestrator | Saturday 08 November 2025 13:41:10 +0000 (0:00:00.528) 0:00:41.301 ***** 2025-11-08 13:41:12.516946 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:41:12.516957 | orchestrator | 2025-11-08 13:41:12.516968 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-11-08 13:41:12.516979 | orchestrator | Saturday 08 November 2025 13:41:11 +0000 (0:00:00.516) 0:00:41.817 ***** 2025-11-08 13:41:12.516989 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:41:12.517000 | orchestrator | 2025-11-08 13:41:12.517011 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-11-08 13:41:12.517021 | orchestrator | Saturday 08 November 2025 13:41:11 +0000 (0:00:00.149) 0:00:41.967 ***** 2025-11-08 13:41:12.517032 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:12.517043 | orchestrator | 2025-11-08 13:41:12.517053 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-11-08 13:41:12.517064 | orchestrator | Saturday 08 November 2025 13:41:11 +0000 (0:00:00.116) 0:00:42.083 ***** 2025-11-08 13:41:12.517075 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:12.517085 | orchestrator | 2025-11-08 13:41:12.517096 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-11-08 13:41:12.517107 | orchestrator | Saturday 08 November 2025 13:41:11 +0000 (0:00:00.113) 0:00:42.197 ***** 2025-11-08 13:41:12.517118 | orchestrator | ok: [testbed-node-4] => { 2025-11-08 13:41:12.517128 | orchestrator |  "vgs_report": { 2025-11-08 13:41:12.517140 | orchestrator |  "vg": [] 2025-11-08 13:41:12.517150 | orchestrator |  } 2025-11-08 13:41:12.517161 | orchestrator | } 2025-11-08 13:41:12.517172 | orchestrator | 2025-11-08 13:41:12.517183 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-11-08 13:41:12.517194 | orchestrator | Saturday 08 November 2025 13:41:11 +0000 (0:00:00.151) 0:00:42.348 ***** 2025-11-08 13:41:12.517204 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:12.517215 | orchestrator | 2025-11-08 13:41:12.517226 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-11-08 13:41:12.517237 | orchestrator | Saturday 08 November 2025 13:41:12 +0000 (0:00:00.129) 0:00:42.478 ***** 2025-11-08 13:41:12.517247 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:12.517258 | orchestrator | 2025-11-08 13:41:12.517269 | orchestrator | TASK [Print size needed for LVs on 
ceph_db_devices] **************************** 2025-11-08 13:41:12.517280 | orchestrator | Saturday 08 November 2025 13:41:12 +0000 (0:00:00.145) 0:00:42.624 ***** 2025-11-08 13:41:12.517290 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:12.517301 | orchestrator | 2025-11-08 13:41:12.517312 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-11-08 13:41:12.517323 | orchestrator | Saturday 08 November 2025 13:41:12 +0000 (0:00:00.136) 0:00:42.761 ***** 2025-11-08 13:41:12.517333 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:12.517344 | orchestrator | 2025-11-08 13:41:12.517370 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-11-08 13:41:17.206115 | orchestrator | Saturday 08 November 2025 13:41:12 +0000 (0:00:00.146) 0:00:42.908 ***** 2025-11-08 13:41:17.206216 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:17.206232 | orchestrator | 2025-11-08 13:41:17.206245 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-11-08 13:41:17.206257 | orchestrator | Saturday 08 November 2025 13:41:12 +0000 (0:00:00.328) 0:00:43.236 ***** 2025-11-08 13:41:17.206268 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:17.206279 | orchestrator | 2025-11-08 13:41:17.206291 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-11-08 13:41:17.206302 | orchestrator | Saturday 08 November 2025 13:41:13 +0000 (0:00:00.174) 0:00:43.411 ***** 2025-11-08 13:41:17.206313 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:17.206324 | orchestrator | 2025-11-08 13:41:17.206334 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-11-08 13:41:17.206345 | orchestrator | Saturday 08 November 2025 13:41:13 +0000 (0:00:00.126) 0:00:43.537 ***** 2025-11-08 13:41:17.206356 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:17.206367 | orchestrator | 2025-11-08 13:41:17.206378 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-11-08 13:41:17.206389 | orchestrator | Saturday 08 November 2025 13:41:13 +0000 (0:00:00.137) 0:00:43.674 ***** 2025-11-08 13:41:17.206400 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:17.206410 | orchestrator | 2025-11-08 13:41:17.206421 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-11-08 13:41:17.206432 | orchestrator | Saturday 08 November 2025 13:41:13 +0000 (0:00:00.149) 0:00:43.823 ***** 2025-11-08 13:41:17.206443 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:17.206454 | orchestrator | 2025-11-08 13:41:17.206465 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-11-08 13:41:17.206476 | orchestrator | Saturday 08 November 2025 13:41:13 +0000 (0:00:00.120) 0:00:43.944 ***** 2025-11-08 13:41:17.206486 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:17.206497 | orchestrator | 2025-11-08 13:41:17.206508 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-11-08 13:41:17.206519 | orchestrator | Saturday 08 November 2025 13:41:13 +0000 (0:00:00.128) 0:00:44.073 ***** 2025-11-08 13:41:17.206529 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:17.206540 | orchestrator | 2025-11-08 13:41:17.206551 | orchestrator | TASK [Fail if DB LV 
size < 30 GiB for ceph_db_devices] ************************* 2025-11-08 13:41:17.206562 | orchestrator | Saturday 08 November 2025 13:41:13 +0000 (0:00:00.136) 0:00:44.209 ***** 2025-11-08 13:41:17.206573 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:17.206583 | orchestrator | 2025-11-08 13:41:17.206594 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-11-08 13:41:17.206605 | orchestrator | Saturday 08 November 2025 13:41:13 +0000 (0:00:00.141) 0:00:44.350 ***** 2025-11-08 13:41:17.206618 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:17.206631 | orchestrator | 2025-11-08 13:41:17.206644 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-11-08 13:41:17.206656 | orchestrator | Saturday 08 November 2025 13:41:14 +0000 (0:00:00.145) 0:00:44.496 ***** 2025-11-08 13:41:17.206669 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f393addc-5b9a-54bf-a4a6-7d44f9449202', 'data_vg': 'ceph-f393addc-5b9a-54bf-a4a6-7d44f9449202'})  2025-11-08 13:41:17.206709 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-380ddcdc-ed2e-5f5e-8a3f-001787d903df', 'data_vg': 'ceph-380ddcdc-ed2e-5f5e-8a3f-001787d903df'})  2025-11-08 13:41:17.206721 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:17.206734 | orchestrator | 2025-11-08 13:41:17.206746 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-11-08 13:41:17.206759 | orchestrator | Saturday 08 November 2025 13:41:14 +0000 (0:00:00.158) 0:00:44.654 ***** 2025-11-08 13:41:17.206793 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f393addc-5b9a-54bf-a4a6-7d44f9449202', 'data_vg': 'ceph-f393addc-5b9a-54bf-a4a6-7d44f9449202'})  2025-11-08 13:41:17.206806 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-380ddcdc-ed2e-5f5e-8a3f-001787d903df', 'data_vg': 'ceph-380ddcdc-ed2e-5f5e-8a3f-001787d903df'})  2025-11-08 13:41:17.206818 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:17.206830 | orchestrator | 2025-11-08 13:41:17.206842 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-11-08 13:41:17.206854 | orchestrator | Saturday 08 November 2025 13:41:14 +0000 (0:00:00.164) 0:00:44.819 ***** 2025-11-08 13:41:17.206866 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f393addc-5b9a-54bf-a4a6-7d44f9449202', 'data_vg': 'ceph-f393addc-5b9a-54bf-a4a6-7d44f9449202'})  2025-11-08 13:41:17.206878 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-380ddcdc-ed2e-5f5e-8a3f-001787d903df', 'data_vg': 'ceph-380ddcdc-ed2e-5f5e-8a3f-001787d903df'})  2025-11-08 13:41:17.206890 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:17.206901 | orchestrator | 2025-11-08 13:41:17.206914 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-11-08 13:41:17.206925 | orchestrator | Saturday 08 November 2025 13:41:14 +0000 (0:00:00.363) 0:00:45.183 ***** 2025-11-08 13:41:17.206937 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f393addc-5b9a-54bf-a4a6-7d44f9449202', 'data_vg': 'ceph-f393addc-5b9a-54bf-a4a6-7d44f9449202'})  2025-11-08 13:41:17.206950 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-380ddcdc-ed2e-5f5e-8a3f-001787d903df', 'data_vg': 'ceph-380ddcdc-ed2e-5f5e-8a3f-001787d903df'})  2025-11-08 
13:41:17.206962 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:17.206973 | orchestrator | 2025-11-08 13:41:17.207002 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-11-08 13:41:17.207013 | orchestrator | Saturday 08 November 2025 13:41:14 +0000 (0:00:00.144) 0:00:45.327 ***** 2025-11-08 13:41:17.207024 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f393addc-5b9a-54bf-a4a6-7d44f9449202', 'data_vg': 'ceph-f393addc-5b9a-54bf-a4a6-7d44f9449202'})  2025-11-08 13:41:17.207035 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-380ddcdc-ed2e-5f5e-8a3f-001787d903df', 'data_vg': 'ceph-380ddcdc-ed2e-5f5e-8a3f-001787d903df'})  2025-11-08 13:41:17.207046 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:17.207056 | orchestrator | 2025-11-08 13:41:17.207067 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-11-08 13:41:17.207078 | orchestrator | Saturday 08 November 2025 13:41:15 +0000 (0:00:00.162) 0:00:45.489 ***** 2025-11-08 13:41:17.207089 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f393addc-5b9a-54bf-a4a6-7d44f9449202', 'data_vg': 'ceph-f393addc-5b9a-54bf-a4a6-7d44f9449202'})  2025-11-08 13:41:17.207100 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-380ddcdc-ed2e-5f5e-8a3f-001787d903df', 'data_vg': 'ceph-380ddcdc-ed2e-5f5e-8a3f-001787d903df'})  2025-11-08 13:41:17.207111 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:17.207122 | orchestrator | 2025-11-08 13:41:17.207133 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-11-08 13:41:17.207144 | orchestrator | Saturday 08 November 2025 13:41:15 +0000 (0:00:00.158) 0:00:45.648 ***** 2025-11-08 13:41:17.207192 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f393addc-5b9a-54bf-a4a6-7d44f9449202', 'data_vg': 'ceph-f393addc-5b9a-54bf-a4a6-7d44f9449202'})  2025-11-08 13:41:17.207204 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-380ddcdc-ed2e-5f5e-8a3f-001787d903df', 'data_vg': 'ceph-380ddcdc-ed2e-5f5e-8a3f-001787d903df'})  2025-11-08 13:41:17.207215 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:17.207225 | orchestrator | 2025-11-08 13:41:17.207236 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-11-08 13:41:17.207254 | orchestrator | Saturday 08 November 2025 13:41:15 +0000 (0:00:00.147) 0:00:45.796 ***** 2025-11-08 13:41:17.207265 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f393addc-5b9a-54bf-a4a6-7d44f9449202', 'data_vg': 'ceph-f393addc-5b9a-54bf-a4a6-7d44f9449202'})  2025-11-08 13:41:17.207282 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-380ddcdc-ed2e-5f5e-8a3f-001787d903df', 'data_vg': 'ceph-380ddcdc-ed2e-5f5e-8a3f-001787d903df'})  2025-11-08 13:41:17.207293 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:17.207303 | orchestrator | 2025-11-08 13:41:17.207314 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-11-08 13:41:17.207325 | orchestrator | Saturday 08 November 2025 13:41:15 +0000 (0:00:00.158) 0:00:45.955 ***** 2025-11-08 13:41:17.207336 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:41:17.207347 | orchestrator | 2025-11-08 13:41:17.207357 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] 
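
The "Get list of Ceph LVs with associated VGs", "Get list of Ceph PVs with associated VGs" and "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" tasks collect the LVM state that ends up in the lvm_report printed at the end of each play. As a rough sketch only (assuming the lvm2 tools and jq on the node; this is not the playbook's actual implementation), the same report can be assembled by hand:

    # {"report":[{"lv":[{"lv_name":"osd-block-...","vg_name":"ceph-..."}]}]}
    lvs --reportformat json -o lv_name,vg_name
    # {"report":[{"pv":[{"pv_name":"/dev/sdb","vg_name":"ceph-..."}]}]}
    pvs --reportformat json -o pv_name,vg_name
    # combine both into an object shaped like the printed lvm_report
    jq -n --argjson lv "$(lvs --reportformat json -o lv_name,vg_name | jq '.report[0].lv')" \
          --argjson pv "$(pvs --reportformat json -o pv_name,vg_name | jq '.report[0].pv')" \
          '{lvm_report: {lv: $lv, pv: $pv}}'
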
******************************** 2025-11-08 13:41:17.207368 | orchestrator | Saturday 08 November 2025 13:41:16 +0000 (0:00:00.524) 0:00:46.479 ***** 2025-11-08 13:41:17.207379 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:41:17.207389 | orchestrator | 2025-11-08 13:41:17.207400 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-11-08 13:41:17.207411 | orchestrator | Saturday 08 November 2025 13:41:16 +0000 (0:00:00.506) 0:00:46.985 ***** 2025-11-08 13:41:17.207422 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:41:17.207432 | orchestrator | 2025-11-08 13:41:17.207443 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-11-08 13:41:17.207454 | orchestrator | Saturday 08 November 2025 13:41:16 +0000 (0:00:00.141) 0:00:47.127 ***** 2025-11-08 13:41:17.207465 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-380ddcdc-ed2e-5f5e-8a3f-001787d903df', 'vg_name': 'ceph-380ddcdc-ed2e-5f5e-8a3f-001787d903df'}) 2025-11-08 13:41:17.207477 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-f393addc-5b9a-54bf-a4a6-7d44f9449202', 'vg_name': 'ceph-f393addc-5b9a-54bf-a4a6-7d44f9449202'}) 2025-11-08 13:41:17.207488 | orchestrator | 2025-11-08 13:41:17.207499 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-11-08 13:41:17.207510 | orchestrator | Saturday 08 November 2025 13:41:16 +0000 (0:00:00.168) 0:00:47.295 ***** 2025-11-08 13:41:17.207521 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f393addc-5b9a-54bf-a4a6-7d44f9449202', 'data_vg': 'ceph-f393addc-5b9a-54bf-a4a6-7d44f9449202'})  2025-11-08 13:41:17.207532 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-380ddcdc-ed2e-5f5e-8a3f-001787d903df', 'data_vg': 'ceph-380ddcdc-ed2e-5f5e-8a3f-001787d903df'})  2025-11-08 13:41:17.207543 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:17.207553 | orchestrator | 2025-11-08 13:41:17.207564 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-11-08 13:41:17.207575 | orchestrator | Saturday 08 November 2025 13:41:17 +0000 (0:00:00.152) 0:00:47.448 ***** 2025-11-08 13:41:17.207586 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f393addc-5b9a-54bf-a4a6-7d44f9449202', 'data_vg': 'ceph-f393addc-5b9a-54bf-a4a6-7d44f9449202'})  2025-11-08 13:41:17.207603 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-380ddcdc-ed2e-5f5e-8a3f-001787d903df', 'data_vg': 'ceph-380ddcdc-ed2e-5f5e-8a3f-001787d903df'})  2025-11-08 13:41:23.293472 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:23.293592 | orchestrator | 2025-11-08 13:41:23.293609 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-11-08 13:41:23.293622 | orchestrator | Saturday 08 November 2025 13:41:17 +0000 (0:00:00.155) 0:00:47.604 ***** 2025-11-08 13:41:23.293633 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f393addc-5b9a-54bf-a4a6-7d44f9449202', 'data_vg': 'ceph-f393addc-5b9a-54bf-a4a6-7d44f9449202'})  2025-11-08 13:41:23.293646 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-380ddcdc-ed2e-5f5e-8a3f-001787d903df', 'data_vg': 'ceph-380ddcdc-ed2e-5f5e-8a3f-001787d903df'})  2025-11-08 13:41:23.293713 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:41:23.293726 | orchestrator | 2025-11-08 
13:41:23.293738 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-11-08 13:41:23.293749 | orchestrator | Saturday 08 November 2025 13:41:17 +0000 (0:00:00.158) 0:00:47.762 ***** 2025-11-08 13:41:23.293759 | orchestrator | ok: [testbed-node-4] => { 2025-11-08 13:41:23.293770 | orchestrator |  "lvm_report": { 2025-11-08 13:41:23.293782 | orchestrator |  "lv": [ 2025-11-08 13:41:23.293792 | orchestrator |  { 2025-11-08 13:41:23.293804 | orchestrator |  "lv_name": "osd-block-380ddcdc-ed2e-5f5e-8a3f-001787d903df", 2025-11-08 13:41:23.293815 | orchestrator |  "vg_name": "ceph-380ddcdc-ed2e-5f5e-8a3f-001787d903df" 2025-11-08 13:41:23.293825 | orchestrator |  }, 2025-11-08 13:41:23.293836 | orchestrator |  { 2025-11-08 13:41:23.293846 | orchestrator |  "lv_name": "osd-block-f393addc-5b9a-54bf-a4a6-7d44f9449202", 2025-11-08 13:41:23.293857 | orchestrator |  "vg_name": "ceph-f393addc-5b9a-54bf-a4a6-7d44f9449202" 2025-11-08 13:41:23.293867 | orchestrator |  } 2025-11-08 13:41:23.293878 | orchestrator |  ], 2025-11-08 13:41:23.293889 | orchestrator |  "pv": [ 2025-11-08 13:41:23.293899 | orchestrator |  { 2025-11-08 13:41:23.293910 | orchestrator |  "pv_name": "/dev/sdb", 2025-11-08 13:41:23.293920 | orchestrator |  "vg_name": "ceph-f393addc-5b9a-54bf-a4a6-7d44f9449202" 2025-11-08 13:41:23.293931 | orchestrator |  }, 2025-11-08 13:41:23.293941 | orchestrator |  { 2025-11-08 13:41:23.293952 | orchestrator |  "pv_name": "/dev/sdc", 2025-11-08 13:41:23.293962 | orchestrator |  "vg_name": "ceph-380ddcdc-ed2e-5f5e-8a3f-001787d903df" 2025-11-08 13:41:23.293973 | orchestrator |  } 2025-11-08 13:41:23.293983 | orchestrator |  ] 2025-11-08 13:41:23.293994 | orchestrator |  } 2025-11-08 13:41:23.294005 | orchestrator | } 2025-11-08 13:41:23.294073 | orchestrator | 2025-11-08 13:41:23.294088 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-11-08 13:41:23.294100 | orchestrator | 2025-11-08 13:41:23.294127 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-11-08 13:41:23.294140 | orchestrator | Saturday 08 November 2025 13:41:17 +0000 (0:00:00.487) 0:00:48.250 ***** 2025-11-08 13:41:23.294153 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-11-08 13:41:23.294166 | orchestrator | 2025-11-08 13:41:23.294178 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-11-08 13:41:23.294190 | orchestrator | Saturday 08 November 2025 13:41:18 +0000 (0:00:00.265) 0:00:48.516 ***** 2025-11-08 13:41:23.294202 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:41:23.294214 | orchestrator | 2025-11-08 13:41:23.294227 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:41:23.294239 | orchestrator | Saturday 08 November 2025 13:41:18 +0000 (0:00:00.239) 0:00:48.755 ***** 2025-11-08 13:41:23.294250 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-11-08 13:41:23.294260 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-11-08 13:41:23.294271 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-11-08 13:41:23.294282 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-11-08 13:41:23.294292 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-11-08 13:41:23.294303 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-11-08 13:41:23.294313 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-11-08 13:41:23.294324 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-11-08 13:41:23.294344 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-11-08 13:41:23.294355 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-11-08 13:41:23.294365 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-11-08 13:41:23.294376 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-11-08 13:41:23.294386 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-11-08 13:41:23.294400 | orchestrator | 2025-11-08 13:41:23.294411 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:41:23.294422 | orchestrator | Saturday 08 November 2025 13:41:18 +0000 (0:00:00.416) 0:00:49.172 ***** 2025-11-08 13:41:23.294432 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:23.294443 | orchestrator | 2025-11-08 13:41:23.294453 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:41:23.294464 | orchestrator | Saturday 08 November 2025 13:41:18 +0000 (0:00:00.204) 0:00:49.376 ***** 2025-11-08 13:41:23.294475 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:23.294485 | orchestrator | 2025-11-08 13:41:23.294496 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:41:23.294524 | orchestrator | Saturday 08 November 2025 13:41:19 +0000 (0:00:00.198) 0:00:49.575 ***** 2025-11-08 13:41:23.294536 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:23.294547 | orchestrator | 2025-11-08 13:41:23.294557 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:41:23.294568 | orchestrator | Saturday 08 November 2025 13:41:19 +0000 (0:00:00.186) 0:00:49.761 ***** 2025-11-08 13:41:23.294578 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:23.294589 | orchestrator | 2025-11-08 13:41:23.294600 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:41:23.294610 | orchestrator | Saturday 08 November 2025 13:41:19 +0000 (0:00:00.212) 0:00:49.974 ***** 2025-11-08 13:41:23.294621 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:23.294631 | orchestrator | 2025-11-08 13:41:23.294642 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:41:23.294653 | orchestrator | Saturday 08 November 2025 13:41:20 +0000 (0:00:00.685) 0:00:50.659 ***** 2025-11-08 13:41:23.294663 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:23.294674 | orchestrator | 2025-11-08 13:41:23.294714 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:41:23.294725 | orchestrator | Saturday 08 November 2025 13:41:20 +0000 (0:00:00.204) 0:00:50.864 ***** 2025-11-08 13:41:23.294736 | orchestrator | skipping: 
[testbed-node-5] 2025-11-08 13:41:23.294746 | orchestrator | 2025-11-08 13:41:23.294756 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:41:23.294767 | orchestrator | Saturday 08 November 2025 13:41:20 +0000 (0:00:00.243) 0:00:51.107 ***** 2025-11-08 13:41:23.294777 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:23.294788 | orchestrator | 2025-11-08 13:41:23.294798 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:41:23.294809 | orchestrator | Saturday 08 November 2025 13:41:20 +0000 (0:00:00.214) 0:00:51.322 ***** 2025-11-08 13:41:23.294820 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165) 2025-11-08 13:41:23.294831 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165) 2025-11-08 13:41:23.294842 | orchestrator | 2025-11-08 13:41:23.294853 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:41:23.294863 | orchestrator | Saturday 08 November 2025 13:41:21 +0000 (0:00:00.387) 0:00:51.710 ***** 2025-11-08 13:41:23.294874 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4485c49e-1f3e-4177-b8cf-e377966726ff) 2025-11-08 13:41:23.294884 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4485c49e-1f3e-4177-b8cf-e377966726ff) 2025-11-08 13:41:23.294902 | orchestrator | 2025-11-08 13:41:23.294919 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:41:23.294930 | orchestrator | Saturday 08 November 2025 13:41:21 +0000 (0:00:00.406) 0:00:52.116 ***** 2025-11-08 13:41:23.294941 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f84a4500-4dd6-44ad-a9ff-274f9f36fc36) 2025-11-08 13:41:23.294951 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f84a4500-4dd6-44ad-a9ff-274f9f36fc36) 2025-11-08 13:41:23.294962 | orchestrator | 2025-11-08 13:41:23.294972 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:41:23.294983 | orchestrator | Saturday 08 November 2025 13:41:22 +0000 (0:00:00.413) 0:00:52.530 ***** 2025-11-08 13:41:23.294993 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c4ff64d0-4838-4e36-9da9-d01e7c6d3995) 2025-11-08 13:41:23.295004 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c4ff64d0-4838-4e36-9da9-d01e7c6d3995) 2025-11-08 13:41:23.295014 | orchestrator | 2025-11-08 13:41:23.295025 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-08 13:41:23.295035 | orchestrator | Saturday 08 November 2025 13:41:22 +0000 (0:00:00.423) 0:00:52.954 ***** 2025-11-08 13:41:23.295046 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-11-08 13:41:23.295056 | orchestrator | 2025-11-08 13:41:23.295067 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:41:23.295077 | orchestrator | Saturday 08 November 2025 13:41:22 +0000 (0:00:00.349) 0:00:53.303 ***** 2025-11-08 13:41:23.295088 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-11-08 13:41:23.295098 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 
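
The "Get initial list of available block devices" and repeated "Add known links to the list of available block devices" tasks build, per node, a device list keyed both by kernel name (sda, sdb, ...) and by the stable /dev/disk/by-id symlinks reported for the QEMU disks. Only as an illustration (the playbook's own lookup logic is not visible in this log; the by-id name below is copied from the testbed-node-5 output above), the same information can be inspected on a node with standard tools:

    # top-level block devices, roughly what the "initial list" task starts from
    lsblk -dn -o NAME,TYPE
    # stable by-id symlinks and the kernel devices they point to
    ls -l /dev/disk/by-id/
    # resolve one of the links seen above back to its kernel device
    readlink -f /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165
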
2025-11-08 13:41:23.295109 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-11-08 13:41:23.295119 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-11-08 13:41:23.295130 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-11-08 13:41:23.295140 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-11-08 13:41:23.295151 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-11-08 13:41:23.295162 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-11-08 13:41:23.295172 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-11-08 13:41:23.295183 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-11-08 13:41:23.295193 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-11-08 13:41:23.295211 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-11-08 13:41:31.652376 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-11-08 13:41:31.652485 | orchestrator | 2025-11-08 13:41:31.652501 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:41:31.652513 | orchestrator | Saturday 08 November 2025 13:41:23 +0000 (0:00:00.384) 0:00:53.688 ***** 2025-11-08 13:41:31.652524 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:31.652536 | orchestrator | 2025-11-08 13:41:31.652549 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:41:31.652559 | orchestrator | Saturday 08 November 2025 13:41:23 +0000 (0:00:00.189) 0:00:53.877 ***** 2025-11-08 13:41:31.652570 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:31.652581 | orchestrator | 2025-11-08 13:41:31.652592 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:41:31.652629 | orchestrator | Saturday 08 November 2025 13:41:23 +0000 (0:00:00.477) 0:00:54.354 ***** 2025-11-08 13:41:31.652640 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:31.652651 | orchestrator | 2025-11-08 13:41:31.652662 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:41:31.652673 | orchestrator | Saturday 08 November 2025 13:41:24 +0000 (0:00:00.169) 0:00:54.523 ***** 2025-11-08 13:41:31.652734 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:31.652745 | orchestrator | 2025-11-08 13:41:31.652756 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:41:31.652767 | orchestrator | Saturday 08 November 2025 13:41:24 +0000 (0:00:00.180) 0:00:54.704 ***** 2025-11-08 13:41:31.652778 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:31.652788 | orchestrator | 2025-11-08 13:41:31.652799 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:41:31.652810 | orchestrator | Saturday 08 November 2025 13:41:24 +0000 (0:00:00.189) 0:00:54.894 ***** 2025-11-08 13:41:31.652820 | orchestrator | 
skipping: [testbed-node-5] 2025-11-08 13:41:31.652831 | orchestrator | 2025-11-08 13:41:31.652841 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:41:31.652852 | orchestrator | Saturday 08 November 2025 13:41:24 +0000 (0:00:00.195) 0:00:55.089 ***** 2025-11-08 13:41:31.652862 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:31.652873 | orchestrator | 2025-11-08 13:41:31.652884 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:41:31.652894 | orchestrator | Saturday 08 November 2025 13:41:24 +0000 (0:00:00.191) 0:00:55.281 ***** 2025-11-08 13:41:31.652906 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:31.652918 | orchestrator | 2025-11-08 13:41:31.652930 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:41:31.652942 | orchestrator | Saturday 08 November 2025 13:41:25 +0000 (0:00:00.178) 0:00:55.459 ***** 2025-11-08 13:41:31.652954 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-11-08 13:41:31.652966 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-11-08 13:41:31.652979 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-11-08 13:41:31.652991 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-11-08 13:41:31.653003 | orchestrator | 2025-11-08 13:41:31.653015 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:41:31.653027 | orchestrator | Saturday 08 November 2025 13:41:25 +0000 (0:00:00.575) 0:00:56.035 ***** 2025-11-08 13:41:31.653039 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:31.653051 | orchestrator | 2025-11-08 13:41:31.653063 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:41:31.653075 | orchestrator | Saturday 08 November 2025 13:41:25 +0000 (0:00:00.206) 0:00:56.241 ***** 2025-11-08 13:41:31.653087 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:31.653098 | orchestrator | 2025-11-08 13:41:31.653110 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:41:31.653123 | orchestrator | Saturday 08 November 2025 13:41:26 +0000 (0:00:00.197) 0:00:56.438 ***** 2025-11-08 13:41:31.653135 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:31.653146 | orchestrator | 2025-11-08 13:41:31.653158 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-08 13:41:31.653170 | orchestrator | Saturday 08 November 2025 13:41:26 +0000 (0:00:00.180) 0:00:56.619 ***** 2025-11-08 13:41:31.653181 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:31.653193 | orchestrator | 2025-11-08 13:41:31.653205 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-11-08 13:41:31.653218 | orchestrator | Saturday 08 November 2025 13:41:26 +0000 (0:00:00.191) 0:00:56.810 ***** 2025-11-08 13:41:31.653229 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:31.653241 | orchestrator | 2025-11-08 13:41:31.653253 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-11-08 13:41:31.653273 | orchestrator | Saturday 08 November 2025 13:41:26 +0000 (0:00:00.261) 0:00:57.072 ***** 2025-11-08 13:41:31.653284 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 
'56ba2a68-c761-5674-9bd2-a2481e6b0580'}}) 2025-11-08 13:41:31.653296 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b5af892c-b8e4-5298-acf4-1670635abe97'}}) 2025-11-08 13:41:31.653306 | orchestrator | 2025-11-08 13:41:31.653317 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-11-08 13:41:31.653328 | orchestrator | Saturday 08 November 2025 13:41:26 +0000 (0:00:00.189) 0:00:57.261 ***** 2025-11-08 13:41:31.653340 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-56ba2a68-c761-5674-9bd2-a2481e6b0580', 'data_vg': 'ceph-56ba2a68-c761-5674-9bd2-a2481e6b0580'}) 2025-11-08 13:41:31.653370 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b5af892c-b8e4-5298-acf4-1670635abe97', 'data_vg': 'ceph-b5af892c-b8e4-5298-acf4-1670635abe97'}) 2025-11-08 13:41:31.653382 | orchestrator | 2025-11-08 13:41:31.653392 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-11-08 13:41:31.653420 | orchestrator | Saturday 08 November 2025 13:41:28 +0000 (0:00:01.813) 0:00:59.074 ***** 2025-11-08 13:41:31.653432 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56ba2a68-c761-5674-9bd2-a2481e6b0580', 'data_vg': 'ceph-56ba2a68-c761-5674-9bd2-a2481e6b0580'})  2025-11-08 13:41:31.653444 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5af892c-b8e4-5298-acf4-1670635abe97', 'data_vg': 'ceph-b5af892c-b8e4-5298-acf4-1670635abe97'})  2025-11-08 13:41:31.653455 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:31.653465 | orchestrator | 2025-11-08 13:41:31.653477 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-11-08 13:41:31.653488 | orchestrator | Saturday 08 November 2025 13:41:28 +0000 (0:00:00.156) 0:00:59.231 ***** 2025-11-08 13:41:31.653498 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-56ba2a68-c761-5674-9bd2-a2481e6b0580', 'data_vg': 'ceph-56ba2a68-c761-5674-9bd2-a2481e6b0580'}) 2025-11-08 13:41:31.653509 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b5af892c-b8e4-5298-acf4-1670635abe97', 'data_vg': 'ceph-b5af892c-b8e4-5298-acf4-1670635abe97'}) 2025-11-08 13:41:31.653520 | orchestrator | 2025-11-08 13:41:31.653531 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-11-08 13:41:31.653541 | orchestrator | Saturday 08 November 2025 13:41:30 +0000 (0:00:01.297) 0:01:00.528 ***** 2025-11-08 13:41:31.653552 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56ba2a68-c761-5674-9bd2-a2481e6b0580', 'data_vg': 'ceph-56ba2a68-c761-5674-9bd2-a2481e6b0580'})  2025-11-08 13:41:31.653562 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5af892c-b8e4-5298-acf4-1670635abe97', 'data_vg': 'ceph-b5af892c-b8e4-5298-acf4-1670635abe97'})  2025-11-08 13:41:31.653573 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:31.653583 | orchestrator | 2025-11-08 13:41:31.653594 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-11-08 13:41:31.653604 | orchestrator | Saturday 08 November 2025 13:41:30 +0000 (0:00:00.154) 0:01:00.682 ***** 2025-11-08 13:41:31.653615 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:31.653647 | orchestrator | 2025-11-08 13:41:31.653658 | orchestrator | TASK [Print 'Create DB VGs'] 
*************************************************** 2025-11-08 13:41:31.653669 | orchestrator | Saturday 08 November 2025 13:41:30 +0000 (0:00:00.138) 0:01:00.821 ***** 2025-11-08 13:41:31.653719 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56ba2a68-c761-5674-9bd2-a2481e6b0580', 'data_vg': 'ceph-56ba2a68-c761-5674-9bd2-a2481e6b0580'})  2025-11-08 13:41:31.653731 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5af892c-b8e4-5298-acf4-1670635abe97', 'data_vg': 'ceph-b5af892c-b8e4-5298-acf4-1670635abe97'})  2025-11-08 13:41:31.653742 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:31.653761 | orchestrator | 2025-11-08 13:41:31.653772 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-11-08 13:41:31.653782 | orchestrator | Saturday 08 November 2025 13:41:30 +0000 (0:00:00.140) 0:01:00.962 ***** 2025-11-08 13:41:31.653793 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:31.653803 | orchestrator | 2025-11-08 13:41:31.653814 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-11-08 13:41:31.653825 | orchestrator | Saturday 08 November 2025 13:41:30 +0000 (0:00:00.140) 0:01:01.102 ***** 2025-11-08 13:41:31.653835 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56ba2a68-c761-5674-9bd2-a2481e6b0580', 'data_vg': 'ceph-56ba2a68-c761-5674-9bd2-a2481e6b0580'})  2025-11-08 13:41:31.653846 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5af892c-b8e4-5298-acf4-1670635abe97', 'data_vg': 'ceph-b5af892c-b8e4-5298-acf4-1670635abe97'})  2025-11-08 13:41:31.653856 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:31.653867 | orchestrator | 2025-11-08 13:41:31.653878 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-11-08 13:41:31.653889 | orchestrator | Saturday 08 November 2025 13:41:30 +0000 (0:00:00.148) 0:01:01.251 ***** 2025-11-08 13:41:31.653899 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:31.653910 | orchestrator | 2025-11-08 13:41:31.653920 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-11-08 13:41:31.653931 | orchestrator | Saturday 08 November 2025 13:41:30 +0000 (0:00:00.138) 0:01:01.390 ***** 2025-11-08 13:41:31.653941 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56ba2a68-c761-5674-9bd2-a2481e6b0580', 'data_vg': 'ceph-56ba2a68-c761-5674-9bd2-a2481e6b0580'})  2025-11-08 13:41:31.653952 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5af892c-b8e4-5298-acf4-1670635abe97', 'data_vg': 'ceph-b5af892c-b8e4-5298-acf4-1670635abe97'})  2025-11-08 13:41:31.653963 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:31.653974 | orchestrator | 2025-11-08 13:41:31.653984 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-11-08 13:41:31.653995 | orchestrator | Saturday 08 November 2025 13:41:31 +0000 (0:00:00.164) 0:01:01.554 ***** 2025-11-08 13:41:31.654005 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:41:31.654076 | orchestrator | 2025-11-08 13:41:31.654091 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-11-08 13:41:31.654102 | orchestrator | Saturday 08 November 2025 13:41:31 +0000 (0:00:00.334) 0:01:01.889 ***** 2025-11-08 13:41:31.654122 | orchestrator | skipping: 
[testbed-node-5] => (item={'data': 'osd-block-56ba2a68-c761-5674-9bd2-a2481e6b0580', 'data_vg': 'ceph-56ba2a68-c761-5674-9bd2-a2481e6b0580'})  2025-11-08 13:41:37.661146 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5af892c-b8e4-5298-acf4-1670635abe97', 'data_vg': 'ceph-b5af892c-b8e4-5298-acf4-1670635abe97'})  2025-11-08 13:41:37.661254 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:37.661271 | orchestrator | 2025-11-08 13:41:37.661284 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-11-08 13:41:37.661297 | orchestrator | Saturday 08 November 2025 13:41:31 +0000 (0:00:00.161) 0:01:02.050 ***** 2025-11-08 13:41:37.661309 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56ba2a68-c761-5674-9bd2-a2481e6b0580', 'data_vg': 'ceph-56ba2a68-c761-5674-9bd2-a2481e6b0580'})  2025-11-08 13:41:37.661320 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5af892c-b8e4-5298-acf4-1670635abe97', 'data_vg': 'ceph-b5af892c-b8e4-5298-acf4-1670635abe97'})  2025-11-08 13:41:37.661331 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:37.661342 | orchestrator | 2025-11-08 13:41:37.661353 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-11-08 13:41:37.661364 | orchestrator | Saturday 08 November 2025 13:41:31 +0000 (0:00:00.152) 0:01:02.202 ***** 2025-11-08 13:41:37.661375 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56ba2a68-c761-5674-9bd2-a2481e6b0580', 'data_vg': 'ceph-56ba2a68-c761-5674-9bd2-a2481e6b0580'})  2025-11-08 13:41:37.661408 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5af892c-b8e4-5298-acf4-1670635abe97', 'data_vg': 'ceph-b5af892c-b8e4-5298-acf4-1670635abe97'})  2025-11-08 13:41:37.661420 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:37.661430 | orchestrator | 2025-11-08 13:41:37.661441 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-11-08 13:41:37.661452 | orchestrator | Saturday 08 November 2025 13:41:31 +0000 (0:00:00.172) 0:01:02.375 ***** 2025-11-08 13:41:37.661463 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:37.661473 | orchestrator | 2025-11-08 13:41:37.661484 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-11-08 13:41:37.661495 | orchestrator | Saturday 08 November 2025 13:41:32 +0000 (0:00:00.151) 0:01:02.526 ***** 2025-11-08 13:41:37.661506 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:37.661516 | orchestrator | 2025-11-08 13:41:37.661527 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-11-08 13:41:37.661552 | orchestrator | Saturday 08 November 2025 13:41:32 +0000 (0:00:00.159) 0:01:02.686 ***** 2025-11-08 13:41:37.661563 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:37.661574 | orchestrator | 2025-11-08 13:41:37.661584 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-11-08 13:41:37.661595 | orchestrator | Saturday 08 November 2025 13:41:32 +0000 (0:00:00.139) 0:01:02.825 ***** 2025-11-08 13:41:37.661606 | orchestrator | ok: [testbed-node-5] => { 2025-11-08 13:41:37.661617 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-11-08 13:41:37.661628 | orchestrator | } 2025-11-08 13:41:37.661639 | orchestrator | 2025-11-08 13:41:37.661650 | orchestrator | 
TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-11-08 13:41:37.661661 | orchestrator | Saturday 08 November 2025 13:41:32 +0000 (0:00:00.143) 0:01:02.969 ***** 2025-11-08 13:41:37.661672 | orchestrator | ok: [testbed-node-5] => { 2025-11-08 13:41:37.661712 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-11-08 13:41:37.661725 | orchestrator | } 2025-11-08 13:41:37.661737 | orchestrator | 2025-11-08 13:41:37.661750 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-11-08 13:41:37.661762 | orchestrator | Saturday 08 November 2025 13:41:32 +0000 (0:00:00.135) 0:01:03.104 ***** 2025-11-08 13:41:37.661774 | orchestrator | ok: [testbed-node-5] => { 2025-11-08 13:41:37.661786 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-11-08 13:41:37.661798 | orchestrator | } 2025-11-08 13:41:37.661810 | orchestrator | 2025-11-08 13:41:37.661822 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-11-08 13:41:37.661835 | orchestrator | Saturday 08 November 2025 13:41:32 +0000 (0:00:00.150) 0:01:03.255 ***** 2025-11-08 13:41:37.661847 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:41:37.661858 | orchestrator | 2025-11-08 13:41:37.661869 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-11-08 13:41:37.661880 | orchestrator | Saturday 08 November 2025 13:41:33 +0000 (0:00:00.506) 0:01:03.762 ***** 2025-11-08 13:41:37.661891 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:41:37.661902 | orchestrator | 2025-11-08 13:41:37.661913 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-11-08 13:41:37.661924 | orchestrator | Saturday 08 November 2025 13:41:33 +0000 (0:00:00.498) 0:01:04.260 ***** 2025-11-08 13:41:37.661934 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:41:37.661945 | orchestrator | 2025-11-08 13:41:37.661956 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-11-08 13:41:37.661967 | orchestrator | Saturday 08 November 2025 13:41:34 +0000 (0:00:00.712) 0:01:04.972 ***** 2025-11-08 13:41:37.661978 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:41:37.661989 | orchestrator | 2025-11-08 13:41:37.661999 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-11-08 13:41:37.662010 | orchestrator | Saturday 08 November 2025 13:41:34 +0000 (0:00:00.152) 0:01:05.125 ***** 2025-11-08 13:41:37.662095 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:37.662107 | orchestrator | 2025-11-08 13:41:37.662118 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-11-08 13:41:37.662129 | orchestrator | Saturday 08 November 2025 13:41:34 +0000 (0:00:00.117) 0:01:05.242 ***** 2025-11-08 13:41:37.662140 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:37.662151 | orchestrator | 2025-11-08 13:41:37.662161 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-11-08 13:41:37.662172 | orchestrator | Saturday 08 November 2025 13:41:34 +0000 (0:00:00.112) 0:01:05.355 ***** 2025-11-08 13:41:37.662183 | orchestrator | ok: [testbed-node-5] => { 2025-11-08 13:41:37.662194 | orchestrator |  "vgs_report": { 2025-11-08 13:41:37.662205 | orchestrator |  "vg": [] 2025-11-08 13:41:37.662233 | orchestrator |  } 2025-11-08 13:41:37.662246 | orchestrator 
| } 2025-11-08 13:41:37.662257 | orchestrator | 2025-11-08 13:41:37.662268 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-11-08 13:41:37.662279 | orchestrator | Saturday 08 November 2025 13:41:35 +0000 (0:00:00.144) 0:01:05.499 ***** 2025-11-08 13:41:37.662289 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:37.662300 | orchestrator | 2025-11-08 13:41:37.662311 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-11-08 13:41:37.662322 | orchestrator | Saturday 08 November 2025 13:41:35 +0000 (0:00:00.134) 0:01:05.634 ***** 2025-11-08 13:41:37.662332 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:37.662343 | orchestrator | 2025-11-08 13:41:37.662354 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-11-08 13:41:37.662365 | orchestrator | Saturday 08 November 2025 13:41:35 +0000 (0:00:00.144) 0:01:05.778 ***** 2025-11-08 13:41:37.662375 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:37.662386 | orchestrator | 2025-11-08 13:41:37.662397 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-11-08 13:41:37.662408 | orchestrator | Saturday 08 November 2025 13:41:35 +0000 (0:00:00.133) 0:01:05.912 ***** 2025-11-08 13:41:37.662419 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:37.662430 | orchestrator | 2025-11-08 13:41:37.662440 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-11-08 13:41:37.662451 | orchestrator | Saturday 08 November 2025 13:41:35 +0000 (0:00:00.129) 0:01:06.041 ***** 2025-11-08 13:41:37.662462 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:37.662473 | orchestrator | 2025-11-08 13:41:37.662483 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-11-08 13:41:37.662494 | orchestrator | Saturday 08 November 2025 13:41:35 +0000 (0:00:00.130) 0:01:06.172 ***** 2025-11-08 13:41:37.662505 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:37.662515 | orchestrator | 2025-11-08 13:41:37.662526 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-11-08 13:41:37.662537 | orchestrator | Saturday 08 November 2025 13:41:35 +0000 (0:00:00.140) 0:01:06.312 ***** 2025-11-08 13:41:37.662548 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:37.662558 | orchestrator | 2025-11-08 13:41:37.662569 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-11-08 13:41:37.662580 | orchestrator | Saturday 08 November 2025 13:41:36 +0000 (0:00:00.138) 0:01:06.450 ***** 2025-11-08 13:41:37.662591 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:37.662602 | orchestrator | 2025-11-08 13:41:37.662612 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-11-08 13:41:37.662629 | orchestrator | Saturday 08 November 2025 13:41:36 +0000 (0:00:00.330) 0:01:06.781 ***** 2025-11-08 13:41:37.662640 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:37.662651 | orchestrator | 2025-11-08 13:41:37.662661 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-11-08 13:41:37.662672 | orchestrator | Saturday 08 November 2025 13:41:36 +0000 (0:00:00.145) 0:01:06.926 ***** 2025-11-08 13:41:37.662702 | orchestrator | 
skipping: [testbed-node-5] 2025-11-08 13:41:37.662720 | orchestrator | 2025-11-08 13:41:37.662731 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-11-08 13:41:37.662742 | orchestrator | Saturday 08 November 2025 13:41:36 +0000 (0:00:00.147) 0:01:07.073 ***** 2025-11-08 13:41:37.662752 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:37.662763 | orchestrator | 2025-11-08 13:41:37.662774 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-11-08 13:41:37.662785 | orchestrator | Saturday 08 November 2025 13:41:36 +0000 (0:00:00.135) 0:01:07.209 ***** 2025-11-08 13:41:37.662796 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:37.662806 | orchestrator | 2025-11-08 13:41:37.662817 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-11-08 13:41:37.662828 | orchestrator | Saturday 08 November 2025 13:41:36 +0000 (0:00:00.121) 0:01:07.330 ***** 2025-11-08 13:41:37.662839 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:37.662849 | orchestrator | 2025-11-08 13:41:37.662860 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-11-08 13:41:37.662871 | orchestrator | Saturday 08 November 2025 13:41:37 +0000 (0:00:00.138) 0:01:07.469 ***** 2025-11-08 13:41:37.662881 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:37.662892 | orchestrator | 2025-11-08 13:41:37.662903 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-11-08 13:41:37.662913 | orchestrator | Saturday 08 November 2025 13:41:37 +0000 (0:00:00.130) 0:01:07.600 ***** 2025-11-08 13:41:37.662924 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56ba2a68-c761-5674-9bd2-a2481e6b0580', 'data_vg': 'ceph-56ba2a68-c761-5674-9bd2-a2481e6b0580'})  2025-11-08 13:41:37.662935 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5af892c-b8e4-5298-acf4-1670635abe97', 'data_vg': 'ceph-b5af892c-b8e4-5298-acf4-1670635abe97'})  2025-11-08 13:41:37.662946 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:37.662957 | orchestrator | 2025-11-08 13:41:37.662968 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-11-08 13:41:37.662978 | orchestrator | Saturday 08 November 2025 13:41:37 +0000 (0:00:00.150) 0:01:07.751 ***** 2025-11-08 13:41:37.662989 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56ba2a68-c761-5674-9bd2-a2481e6b0580', 'data_vg': 'ceph-56ba2a68-c761-5674-9bd2-a2481e6b0580'})  2025-11-08 13:41:37.663000 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5af892c-b8e4-5298-acf4-1670635abe97', 'data_vg': 'ceph-b5af892c-b8e4-5298-acf4-1670635abe97'})  2025-11-08 13:41:37.663011 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:37.663021 | orchestrator | 2025-11-08 13:41:37.663032 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-11-08 13:41:37.663042 | orchestrator | Saturday 08 November 2025 13:41:37 +0000 (0:00:00.151) 0:01:07.903 ***** 2025-11-08 13:41:37.663061 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56ba2a68-c761-5674-9bd2-a2481e6b0580', 'data_vg': 'ceph-56ba2a68-c761-5674-9bd2-a2481e6b0580'})  2025-11-08 13:41:40.597012 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-b5af892c-b8e4-5298-acf4-1670635abe97', 'data_vg': 'ceph-b5af892c-b8e4-5298-acf4-1670635abe97'})  2025-11-08 13:41:40.597120 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:40.597137 | orchestrator | 2025-11-08 13:41:40.597149 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-11-08 13:41:40.597162 | orchestrator | Saturday 08 November 2025 13:41:37 +0000 (0:00:00.155) 0:01:08.059 ***** 2025-11-08 13:41:40.597173 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56ba2a68-c761-5674-9bd2-a2481e6b0580', 'data_vg': 'ceph-56ba2a68-c761-5674-9bd2-a2481e6b0580'})  2025-11-08 13:41:40.597185 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5af892c-b8e4-5298-acf4-1670635abe97', 'data_vg': 'ceph-b5af892c-b8e4-5298-acf4-1670635abe97'})  2025-11-08 13:41:40.597196 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:40.597206 | orchestrator | 2025-11-08 13:41:40.597239 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-11-08 13:41:40.597250 | orchestrator | Saturday 08 November 2025 13:41:37 +0000 (0:00:00.155) 0:01:08.215 ***** 2025-11-08 13:41:40.597262 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56ba2a68-c761-5674-9bd2-a2481e6b0580', 'data_vg': 'ceph-56ba2a68-c761-5674-9bd2-a2481e6b0580'})  2025-11-08 13:41:40.597273 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5af892c-b8e4-5298-acf4-1670635abe97', 'data_vg': 'ceph-b5af892c-b8e4-5298-acf4-1670635abe97'})  2025-11-08 13:41:40.597284 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:40.597294 | orchestrator | 2025-11-08 13:41:40.597305 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-11-08 13:41:40.597316 | orchestrator | Saturday 08 November 2025 13:41:37 +0000 (0:00:00.140) 0:01:08.355 ***** 2025-11-08 13:41:40.597327 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56ba2a68-c761-5674-9bd2-a2481e6b0580', 'data_vg': 'ceph-56ba2a68-c761-5674-9bd2-a2481e6b0580'})  2025-11-08 13:41:40.597338 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5af892c-b8e4-5298-acf4-1670635abe97', 'data_vg': 'ceph-b5af892c-b8e4-5298-acf4-1670635abe97'})  2025-11-08 13:41:40.597349 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:40.597359 | orchestrator | 2025-11-08 13:41:40.597370 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-11-08 13:41:40.597381 | orchestrator | Saturday 08 November 2025 13:41:38 +0000 (0:00:00.345) 0:01:08.701 ***** 2025-11-08 13:41:40.597392 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56ba2a68-c761-5674-9bd2-a2481e6b0580', 'data_vg': 'ceph-56ba2a68-c761-5674-9bd2-a2481e6b0580'})  2025-11-08 13:41:40.597403 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5af892c-b8e4-5298-acf4-1670635abe97', 'data_vg': 'ceph-b5af892c-b8e4-5298-acf4-1670635abe97'})  2025-11-08 13:41:40.597414 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:40.597425 | orchestrator | 2025-11-08 13:41:40.597435 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-11-08 13:41:40.597446 | orchestrator | Saturday 08 November 2025 13:41:38 +0000 (0:00:00.170) 0:01:08.871 ***** 2025-11-08 13:41:40.597457 | orchestrator | skipping: [testbed-node-5] => 
(item={'data': 'osd-block-56ba2a68-c761-5674-9bd2-a2481e6b0580', 'data_vg': 'ceph-56ba2a68-c761-5674-9bd2-a2481e6b0580'})  2025-11-08 13:41:40.597468 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5af892c-b8e4-5298-acf4-1670635abe97', 'data_vg': 'ceph-b5af892c-b8e4-5298-acf4-1670635abe97'})  2025-11-08 13:41:40.597479 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:40.597489 | orchestrator | 2025-11-08 13:41:40.597500 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-11-08 13:41:40.597511 | orchestrator | Saturday 08 November 2025 13:41:38 +0000 (0:00:00.152) 0:01:09.024 ***** 2025-11-08 13:41:40.597522 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:41:40.597533 | orchestrator | 2025-11-08 13:41:40.597544 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-11-08 13:41:40.597555 | orchestrator | Saturday 08 November 2025 13:41:39 +0000 (0:00:00.501) 0:01:09.525 ***** 2025-11-08 13:41:40.597565 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:41:40.597576 | orchestrator | 2025-11-08 13:41:40.597587 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-11-08 13:41:40.597598 | orchestrator | Saturday 08 November 2025 13:41:39 +0000 (0:00:00.512) 0:01:10.038 ***** 2025-11-08 13:41:40.597609 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:41:40.597619 | orchestrator | 2025-11-08 13:41:40.597630 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-11-08 13:41:40.597641 | orchestrator | Saturday 08 November 2025 13:41:39 +0000 (0:00:00.156) 0:01:10.194 ***** 2025-11-08 13:41:40.597652 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-56ba2a68-c761-5674-9bd2-a2481e6b0580', 'vg_name': 'ceph-56ba2a68-c761-5674-9bd2-a2481e6b0580'}) 2025-11-08 13:41:40.597672 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-b5af892c-b8e4-5298-acf4-1670635abe97', 'vg_name': 'ceph-b5af892c-b8e4-5298-acf4-1670635abe97'}) 2025-11-08 13:41:40.597715 | orchestrator | 2025-11-08 13:41:40.597727 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-11-08 13:41:40.597738 | orchestrator | Saturday 08 November 2025 13:41:39 +0000 (0:00:00.173) 0:01:10.368 ***** 2025-11-08 13:41:40.597785 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56ba2a68-c761-5674-9bd2-a2481e6b0580', 'data_vg': 'ceph-56ba2a68-c761-5674-9bd2-a2481e6b0580'})  2025-11-08 13:41:40.597798 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5af892c-b8e4-5298-acf4-1670635abe97', 'data_vg': 'ceph-b5af892c-b8e4-5298-acf4-1670635abe97'})  2025-11-08 13:41:40.597810 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:40.597820 | orchestrator | 2025-11-08 13:41:40.597832 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-11-08 13:41:40.597843 | orchestrator | Saturday 08 November 2025 13:41:40 +0000 (0:00:00.143) 0:01:10.511 ***** 2025-11-08 13:41:40.597854 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56ba2a68-c761-5674-9bd2-a2481e6b0580', 'data_vg': 'ceph-56ba2a68-c761-5674-9bd2-a2481e6b0580'})  2025-11-08 13:41:40.597865 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5af892c-b8e4-5298-acf4-1670635abe97', 'data_vg': 'ceph-b5af892c-b8e4-5298-acf4-1670635abe97'})  
2025-11-08 13:41:40.597876 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:40.597886 | orchestrator | 2025-11-08 13:41:40.597897 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-11-08 13:41:40.597908 | orchestrator | Saturday 08 November 2025 13:41:40 +0000 (0:00:00.155) 0:01:10.667 ***** 2025-11-08 13:41:40.597918 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56ba2a68-c761-5674-9bd2-a2481e6b0580', 'data_vg': 'ceph-56ba2a68-c761-5674-9bd2-a2481e6b0580'})  2025-11-08 13:41:40.597929 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5af892c-b8e4-5298-acf4-1670635abe97', 'data_vg': 'ceph-b5af892c-b8e4-5298-acf4-1670635abe97'})  2025-11-08 13:41:40.597940 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:41:40.597950 | orchestrator | 2025-11-08 13:41:40.597961 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-11-08 13:41:40.597971 | orchestrator | Saturday 08 November 2025 13:41:40 +0000 (0:00:00.149) 0:01:10.818 ***** 2025-11-08 13:41:40.597982 | orchestrator | ok: [testbed-node-5] => { 2025-11-08 13:41:40.597993 | orchestrator |  "lvm_report": { 2025-11-08 13:41:40.598008 | orchestrator |  "lv": [ 2025-11-08 13:41:40.598112 | orchestrator |  { 2025-11-08 13:41:40.598128 | orchestrator |  "lv_name": "osd-block-56ba2a68-c761-5674-9bd2-a2481e6b0580", 2025-11-08 13:41:40.598140 | orchestrator |  "vg_name": "ceph-56ba2a68-c761-5674-9bd2-a2481e6b0580" 2025-11-08 13:41:40.598151 | orchestrator |  }, 2025-11-08 13:41:40.598161 | orchestrator |  { 2025-11-08 13:41:40.598172 | orchestrator |  "lv_name": "osd-block-b5af892c-b8e4-5298-acf4-1670635abe97", 2025-11-08 13:41:40.598183 | orchestrator |  "vg_name": "ceph-b5af892c-b8e4-5298-acf4-1670635abe97" 2025-11-08 13:41:40.598194 | orchestrator |  } 2025-11-08 13:41:40.598204 | orchestrator |  ], 2025-11-08 13:41:40.598215 | orchestrator |  "pv": [ 2025-11-08 13:41:40.598226 | orchestrator |  { 2025-11-08 13:41:40.598237 | orchestrator |  "pv_name": "/dev/sdb", 2025-11-08 13:41:40.598248 | orchestrator |  "vg_name": "ceph-56ba2a68-c761-5674-9bd2-a2481e6b0580" 2025-11-08 13:41:40.598259 | orchestrator |  }, 2025-11-08 13:41:40.598269 | orchestrator |  { 2025-11-08 13:41:40.598280 | orchestrator |  "pv_name": "/dev/sdc", 2025-11-08 13:41:40.598291 | orchestrator |  "vg_name": "ceph-b5af892c-b8e4-5298-acf4-1670635abe97" 2025-11-08 13:41:40.598320 | orchestrator |  } 2025-11-08 13:41:40.598331 | orchestrator |  ] 2025-11-08 13:41:40.598342 | orchestrator |  } 2025-11-08 13:41:40.598353 | orchestrator | } 2025-11-08 13:41:40.598364 | orchestrator | 2025-11-08 13:41:40.598374 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 13:41:40.598385 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-11-08 13:41:40.598396 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-11-08 13:41:40.598407 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-11-08 13:41:40.598418 | orchestrator | 2025-11-08 13:41:40.598429 | orchestrator | 2025-11-08 13:41:40.598440 | orchestrator | 2025-11-08 13:41:40.598451 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 13:41:40.598461 | orchestrator | Saturday 08 November 
2025 13:41:40 +0000 (0:00:00.161) 0:01:10.979 ***** 2025-11-08 13:41:40.598472 | orchestrator | =============================================================================== 2025-11-08 13:41:40.598483 | orchestrator | Create block VGs -------------------------------------------------------- 5.71s 2025-11-08 13:41:40.598494 | orchestrator | Create block LVs -------------------------------------------------------- 4.09s 2025-11-08 13:41:40.598504 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.76s 2025-11-08 13:41:40.598515 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.63s 2025-11-08 13:41:40.598526 | orchestrator | Add known partitions to the list of available block devices ------------- 1.56s 2025-11-08 13:41:40.598537 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.53s 2025-11-08 13:41:40.598548 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.52s 2025-11-08 13:41:40.598559 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.52s 2025-11-08 13:41:40.598579 | orchestrator | Add known links to the list of available block devices ------------------ 1.32s 2025-11-08 13:41:41.015830 | orchestrator | Add known partitions to the list of available block devices ------------- 1.22s 2025-11-08 13:41:41.015931 | orchestrator | Add known links to the list of available block devices ------------------ 1.02s 2025-11-08 13:41:41.015946 | orchestrator | Print LVM report data --------------------------------------------------- 0.98s 2025-11-08 13:41:41.015959 | orchestrator | Add known partitions to the list of available block devices ------------- 0.78s 2025-11-08 13:41:41.015970 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.76s 2025-11-08 13:41:41.015981 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.76s 2025-11-08 13:41:41.015992 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.69s 2025-11-08 13:41:41.016004 | orchestrator | Create WAL LVs for ceph_wal_devices ------------------------------------- 0.69s 2025-11-08 13:41:41.016015 | orchestrator | Get initial list of available block devices ----------------------------- 0.69s 2025-11-08 13:41:41.016027 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s 2025-11-08 13:41:41.016038 | orchestrator | Print 'Create WAL LVs for ceph_db_wal_devices' -------------------------- 0.67s 2025-11-08 13:41:53.326236 | orchestrator | 2025-11-08 13:41:53 | INFO  | Task c73ce373-d28a-4af2-95d4-452a122f1eab (facts) was prepared for execution. 2025-11-08 13:41:53.326350 | orchestrator | 2025-11-08 13:41:53 | INFO  | It takes a moment until task c73ce373-d28a-4af2-95d4-452a122f1eab (facts) has been started and output is visible here. 
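For reference, the 'Create block VGs' and 'Create block LVs' tasks above amount to one volume group and one full-size logical volume per OSD data disk; the LVM report above shows /dev/sdb and /dev/sdc backing the two ceph-<uuid> VGs. A minimal Ansible sketch of the same operation for one of the disks, assuming the community.general.lvg/lvol modules (the actual OSISM tasks may be implemented differently):

  # Sketch only -- module choice is an assumption; VG/LV names are taken from the log above.
  - name: Create block VG for the OSD device /dev/sdb
    community.general.lvg:
      vg: ceph-56ba2a68-c761-5674-9bd2-a2481e6b0580
      pvs: /dev/sdb

  - name: Create block LV spanning the whole VG
    community.general.lvol:
      vg: ceph-56ba2a68-c761-5674-9bd2-a2481e6b0580
      lv: osd-block-56ba2a68-c761-5674-9bd2-a2481e6b0580
      size: 100%FREE

The resulting VG/LV pairs are exactly what the 'Print LVM report data' task lists in its lvm_report structure above.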
2025-11-08 13:42:06.573729 | orchestrator | 2025-11-08 13:42:06.573839 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-11-08 13:42:06.573853 | orchestrator | 2025-11-08 13:42:06.573862 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-11-08 13:42:06.573899 | orchestrator | Saturday 08 November 2025 13:41:57 +0000 (0:00:00.267) 0:00:00.267 ***** 2025-11-08 13:42:06.573911 | orchestrator | ok: [testbed-manager] 2025-11-08 13:42:06.573925 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:42:06.573936 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:42:06.573948 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:42:06.573961 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:42:06.573972 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:42:06.574003 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:42:06.574012 | orchestrator | 2025-11-08 13:42:06.574070 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-11-08 13:42:06.574078 | orchestrator | Saturday 08 November 2025 13:41:58 +0000 (0:00:01.125) 0:00:01.393 ***** 2025-11-08 13:42:06.574085 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:42:06.574115 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:42:06.574124 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:42:06.574131 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:42:06.574138 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:42:06.574145 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:42:06.574152 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:42:06.574159 | orchestrator | 2025-11-08 13:42:06.574167 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-11-08 13:42:06.574174 | orchestrator | 2025-11-08 13:42:06.574181 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-11-08 13:42:06.574188 | orchestrator | Saturday 08 November 2025 13:41:59 +0000 (0:00:01.238) 0:00:02.632 ***** 2025-11-08 13:42:06.574195 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:42:06.574202 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:42:06.574210 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:42:06.574217 | orchestrator | ok: [testbed-manager] 2025-11-08 13:42:06.574224 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:42:06.574231 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:42:06.574239 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:42:06.574247 | orchestrator | 2025-11-08 13:42:06.574255 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-11-08 13:42:06.574263 | orchestrator | 2025-11-08 13:42:06.574272 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-11-08 13:42:06.574281 | orchestrator | Saturday 08 November 2025 13:42:05 +0000 (0:00:05.670) 0:00:08.302 ***** 2025-11-08 13:42:06.574289 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:42:06.574297 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:42:06.574305 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:42:06.574314 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:42:06.574322 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:42:06.574330 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:42:06.574337 | orchestrator | skipping: 
[testbed-node-5] 2025-11-08 13:42:06.574346 | orchestrator | 2025-11-08 13:42:06.574354 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 13:42:06.574362 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 13:42:06.574373 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 13:42:06.574381 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 13:42:06.574390 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 13:42:06.574398 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 13:42:06.574406 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 13:42:06.574423 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 13:42:06.574431 | orchestrator | 2025-11-08 13:42:06.574439 | orchestrator | 2025-11-08 13:42:06.574448 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 13:42:06.574457 | orchestrator | Saturday 08 November 2025 13:42:06 +0000 (0:00:00.545) 0:00:08.848 ***** 2025-11-08 13:42:06.574465 | orchestrator | =============================================================================== 2025-11-08 13:42:06.574473 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.67s 2025-11-08 13:42:06.574481 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.24s 2025-11-08 13:42:06.574490 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.13s 2025-11-08 13:42:06.574498 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.55s 2025-11-08 13:42:19.012507 | orchestrator | 2025-11-08 13:42:19 | INFO  | Task 6d895e47-4442-47f9-8871-9280f4d81a0e (frr) was prepared for execution. 2025-11-08 13:42:19.012586 | orchestrator | 2025-11-08 13:42:19 | INFO  | It takes a moment until task 6d895e47-4442-47f9-8871-9280f4d81a0e (frr) has been started and output is visible here. 
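The facts play above only ensures the custom facts directory exists and refreshes host facts; 'Copy fact files' was skipped on every host, so no custom facts changed in this run. A minimal sketch of the local-facts mechanism the osism.commons.facts role builds on, assuming the default /etc/ansible/facts.d location (the role may use a different path):

  # Sketch only -- the path and task layout are assumptions, not the OSISM source.
  - name: Create custom facts directory
    ansible.builtin.file:
      path: /etc/ansible/facts.d
      state: directory
      mode: "0755"

  # Any *.fact file (INI or JSON) or executable placed there is exposed
  # under ansible_local after the next fact-gathering run:
  - name: Refresh facts so ansible_local picks up new fact files
    ansible.builtin.setup:
      filter: ansible_local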
2025-11-08 13:42:46.345955 | orchestrator | 2025-11-08 13:42:46.346065 | orchestrator | PLAY [Apply role frr] ********************************************************** 2025-11-08 13:42:46.346073 | orchestrator | 2025-11-08 13:42:46.346077 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2025-11-08 13:42:46.346081 | orchestrator | Saturday 08 November 2025 13:42:22 +0000 (0:00:00.230) 0:00:00.230 ***** 2025-11-08 13:42:46.346086 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2025-11-08 13:42:46.346092 | orchestrator | 2025-11-08 13:42:46.346096 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2025-11-08 13:42:46.346100 | orchestrator | Saturday 08 November 2025 13:42:23 +0000 (0:00:00.221) 0:00:00.452 ***** 2025-11-08 13:42:46.346104 | orchestrator | changed: [testbed-manager] 2025-11-08 13:42:46.346108 | orchestrator | 2025-11-08 13:42:46.346123 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2025-11-08 13:42:46.346127 | orchestrator | Saturday 08 November 2025 13:42:24 +0000 (0:00:01.164) 0:00:01.616 ***** 2025-11-08 13:42:46.346131 | orchestrator | changed: [testbed-manager] 2025-11-08 13:42:46.346134 | orchestrator | 2025-11-08 13:42:46.346138 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2025-11-08 13:42:46.346142 | orchestrator | Saturday 08 November 2025 13:42:34 +0000 (0:00:09.960) 0:00:11.577 ***** 2025-11-08 13:42:46.346145 | orchestrator | ok: [testbed-manager] 2025-11-08 13:42:46.346150 | orchestrator | 2025-11-08 13:42:46.346153 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2025-11-08 13:42:46.346157 | orchestrator | Saturday 08 November 2025 13:42:35 +0000 (0:00:01.064) 0:00:12.642 ***** 2025-11-08 13:42:46.346161 | orchestrator | changed: [testbed-manager] 2025-11-08 13:42:46.346165 | orchestrator | 2025-11-08 13:42:46.346168 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2025-11-08 13:42:46.346172 | orchestrator | Saturday 08 November 2025 13:42:36 +0000 (0:00:00.908) 0:00:13.551 ***** 2025-11-08 13:42:46.346176 | orchestrator | ok: [testbed-manager] 2025-11-08 13:42:46.346180 | orchestrator | 2025-11-08 13:42:46.346183 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2025-11-08 13:42:46.346188 | orchestrator | Saturday 08 November 2025 13:42:37 +0000 (0:00:01.220) 0:00:14.771 ***** 2025-11-08 13:42:46.346191 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:42:46.346195 | orchestrator | 2025-11-08 13:42:46.346199 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2025-11-08 13:42:46.346218 | orchestrator | Saturday 08 November 2025 13:42:37 +0000 (0:00:00.148) 0:00:14.919 ***** 2025-11-08 13:42:46.346222 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:42:46.346226 | orchestrator | 2025-11-08 13:42:46.346230 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2025-11-08 13:42:46.346233 | orchestrator | Saturday 08 November 2025 13:42:37 +0000 (0:00:00.160) 0:00:15.080 ***** 2025-11-08 13:42:46.346237 | orchestrator | changed: [testbed-manager] 2025-11-08 13:42:46.346241 | orchestrator | 2025-11-08 
13:42:46.346244 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2025-11-08 13:42:46.346248 | orchestrator | Saturday 08 November 2025 13:42:38 +0000 (0:00:00.949) 0:00:16.029 ***** 2025-11-08 13:42:46.346252 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2025-11-08 13:42:46.346255 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2025-11-08 13:42:46.346260 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2025-11-08 13:42:46.346263 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2025-11-08 13:42:46.346267 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2025-11-08 13:42:46.346271 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2025-11-08 13:42:46.346275 | orchestrator | 2025-11-08 13:42:46.346278 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2025-11-08 13:42:46.346282 | orchestrator | Saturday 08 November 2025 13:42:41 +0000 (0:00:03.235) 0:00:19.265 ***** 2025-11-08 13:42:46.346286 | orchestrator | ok: [testbed-manager] 2025-11-08 13:42:46.346289 | orchestrator | 2025-11-08 13:42:46.346293 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2025-11-08 13:42:46.346297 | orchestrator | Saturday 08 November 2025 13:42:44 +0000 (0:00:02.686) 0:00:21.951 ***** 2025-11-08 13:42:46.346300 | orchestrator | changed: [testbed-manager] 2025-11-08 13:42:46.346304 | orchestrator | 2025-11-08 13:42:46.346308 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 13:42:46.346312 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 13:42:46.346316 | orchestrator | 2025-11-08 13:42:46.346319 | orchestrator | 2025-11-08 13:42:46.346323 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 13:42:46.346327 | orchestrator | Saturday 08 November 2025 13:42:46 +0000 (0:00:01.422) 0:00:23.373 ***** 2025-11-08 13:42:46.346330 | orchestrator | =============================================================================== 2025-11-08 13:42:46.346334 | orchestrator | osism.services.frr : Install frr package -------------------------------- 9.96s 2025-11-08 13:42:46.346338 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.24s 2025-11-08 13:42:46.346342 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 2.69s 2025-11-08 13:42:46.346345 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.42s 2025-11-08 13:42:46.346349 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.22s 2025-11-08 13:42:46.346363 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.16s 2025-11-08 13:42:46.346367 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.06s 2025-11-08 13:42:46.346370 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.95s 2025-11-08 13:42:46.346374 | 
orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.91s 2025-11-08 13:42:46.346378 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.22s 2025-11-08 13:42:46.346382 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.16s 2025-11-08 13:42:46.346389 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.15s 2025-11-08 13:42:46.619020 | orchestrator | 2025-11-08 13:42:46.623468 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sat Nov 8 13:42:46 UTC 2025 2025-11-08 13:42:46.623517 | orchestrator | 2025-11-08 13:42:48.471893 | orchestrator | 2025-11-08 13:42:48 | INFO  | Collection nutshell is prepared for execution 2025-11-08 13:42:48.471989 | orchestrator | 2025-11-08 13:42:48 | INFO  | A [0] - dotfiles 2025-11-08 13:42:58.482481 | orchestrator | 2025-11-08 13:42:58 | INFO  | A [0] - homer 2025-11-08 13:42:58.482581 | orchestrator | 2025-11-08 13:42:58 | INFO  | A [0] - netdata 2025-11-08 13:42:58.482594 | orchestrator | 2025-11-08 13:42:58 | INFO  | A [0] - openstackclient 2025-11-08 13:42:58.482604 | orchestrator | 2025-11-08 13:42:58 | INFO  | A [0] - phpmyadmin 2025-11-08 13:42:58.482614 | orchestrator | 2025-11-08 13:42:58 | INFO  | A [0] - common 2025-11-08 13:42:58.486757 | orchestrator | 2025-11-08 13:42:58 | INFO  | A [1] -- loadbalancer 2025-11-08 13:42:58.487182 | orchestrator | 2025-11-08 13:42:58 | INFO  | A [2] --- opensearch 2025-11-08 13:42:58.487317 | orchestrator | 2025-11-08 13:42:58 | INFO  | A [2] --- mariadb-ng 2025-11-08 13:42:58.487777 | orchestrator | 2025-11-08 13:42:58 | INFO  | A [3] ---- horizon 2025-11-08 13:42:58.488174 | orchestrator | 2025-11-08 13:42:58 | INFO  | A [3] ---- keystone 2025-11-08 13:42:58.488489 | orchestrator | 2025-11-08 13:42:58 | INFO  | A [4] ----- neutron 2025-11-08 13:42:58.488773 | orchestrator | 2025-11-08 13:42:58 | INFO  | A [5] ------ wait-for-nova 2025-11-08 13:42:58.489068 | orchestrator | 2025-11-08 13:42:58 | INFO  | A [6] ------- octavia 2025-11-08 13:42:58.490605 | orchestrator | 2025-11-08 13:42:58 | INFO  | A [4] ----- barbican 2025-11-08 13:42:58.490863 | orchestrator | 2025-11-08 13:42:58 | INFO  | A [4] ----- designate 2025-11-08 13:42:58.491232 | orchestrator | 2025-11-08 13:42:58 | INFO  | A [4] ----- ironic 2025-11-08 13:42:58.491338 | orchestrator | 2025-11-08 13:42:58 | INFO  | A [4] ----- placement 2025-11-08 13:42:58.491849 | orchestrator | 2025-11-08 13:42:58 | INFO  | A [4] ----- magnum 2025-11-08 13:42:58.492291 | orchestrator | 2025-11-08 13:42:58 | INFO  | A [1] -- openvswitch 2025-11-08 13:42:58.492648 | orchestrator | 2025-11-08 13:42:58 | INFO  | A [2] --- ovn 2025-11-08 13:42:58.492933 | orchestrator | 2025-11-08 13:42:58 | INFO  | A [1] -- memcached 2025-11-08 13:42:58.493209 | orchestrator | 2025-11-08 13:42:58 | INFO  | A [1] -- redis 2025-11-08 13:42:58.493407 | orchestrator | 2025-11-08 13:42:58 | INFO  | A [1] -- rabbitmq-ng 2025-11-08 13:42:58.493713 | orchestrator | 2025-11-08 13:42:58 | INFO  | A [0] - kubernetes 2025-11-08 13:42:58.496260 | orchestrator | 2025-11-08 13:42:58 | INFO  | A [1] -- kubeconfig 2025-11-08 13:42:58.496283 | orchestrator | 2025-11-08 13:42:58 | INFO  | A [1] -- copy-kubeconfig 2025-11-08 13:42:58.496537 | orchestrator | 2025-11-08 13:42:58 | INFO  | A [0] - ceph 2025-11-08 13:42:58.498466 | orchestrator | 2025-11-08 13:42:58 | INFO  | A [1] -- ceph-pools 2025-11-08 13:42:58.499060 | orchestrator | 
2025-11-08 13:42:58 | INFO  | A [2] --- copy-ceph-keys 2025-11-08 13:42:58.499080 | orchestrator | 2025-11-08 13:42:58 | INFO  | A [3] ---- cephclient 2025-11-08 13:42:58.499091 | orchestrator | 2025-11-08 13:42:58 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2025-11-08 13:42:58.499309 | orchestrator | 2025-11-08 13:42:58 | INFO  | A [4] ----- wait-for-keystone 2025-11-08 13:42:58.499328 | orchestrator | 2025-11-08 13:42:58 | INFO  | A [5] ------ kolla-ceph-rgw 2025-11-08 13:42:58.499368 | orchestrator | 2025-11-08 13:42:58 | INFO  | A [5] ------ glance 2025-11-08 13:42:58.499380 | orchestrator | 2025-11-08 13:42:58 | INFO  | A [5] ------ cinder 2025-11-08 13:42:58.499392 | orchestrator | 2025-11-08 13:42:58 | INFO  | A [5] ------ nova 2025-11-08 13:42:58.499725 | orchestrator | 2025-11-08 13:42:58 | INFO  | A [4] ----- prometheus 2025-11-08 13:42:58.499746 | orchestrator | 2025-11-08 13:42:58 | INFO  | A [5] ------ grafana 2025-11-08 13:42:58.732235 | orchestrator | 2025-11-08 13:42:58 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-11-08 13:42:58.732324 | orchestrator | 2025-11-08 13:42:58 | INFO  | Tasks are running in the background 2025-11-08 13:43:01.837578 | orchestrator | 2025-11-08 13:43:01 | INFO  | No task IDs specified, wait for all currently running tasks 2025-11-08 13:43:03.964049 | orchestrator | 2025-11-08 13:43:03 | INFO  | Task fd495618-ba06-47ef-8b58-4c02aabceb11 is in state STARTED 2025-11-08 13:43:03.966730 | orchestrator | 2025-11-08 13:43:03 | INFO  | Task e48c4929-cd66-4c87-9a18-8d76084f9d9c is in state STARTED 2025-11-08 13:43:03.966927 | orchestrator | 2025-11-08 13:43:03 | INFO  | Task e3f86054-8a50-4dfe-8f63-53da849d4889 is in state STARTED 2025-11-08 13:43:03.971206 | orchestrator | 2025-11-08 13:43:03 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:43:03.971711 | orchestrator | 2025-11-08 13:43:03 | INFO  | Task 669421fe-bdfd-45f7-9b7a-0331b70caefd is in state STARTED 2025-11-08 13:43:03.972202 | orchestrator | 2025-11-08 13:43:03 | INFO  | Task 51110d75-8664-4632-a283-f454f8320543 is in state STARTED 2025-11-08 13:43:03.972886 | orchestrator | 2025-11-08 13:43:03 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:43:03.972909 | orchestrator | 2025-11-08 13:43:03 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:43:07.082935 | orchestrator | 2025-11-08 13:43:07 | INFO  | Task fd495618-ba06-47ef-8b58-4c02aabceb11 is in state STARTED 2025-11-08 13:43:07.084333 | orchestrator | 2025-11-08 13:43:07 | INFO  | Task e48c4929-cd66-4c87-9a18-8d76084f9d9c is in state STARTED 2025-11-08 13:43:07.084377 | orchestrator | 2025-11-08 13:43:07 | INFO  | Task e3f86054-8a50-4dfe-8f63-53da849d4889 is in state STARTED 2025-11-08 13:43:07.084390 | orchestrator | 2025-11-08 13:43:07 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:43:07.086176 | orchestrator | 2025-11-08 13:43:07 | INFO  | Task 669421fe-bdfd-45f7-9b7a-0331b70caefd is in state STARTED 2025-11-08 13:43:07.086225 | orchestrator | 2025-11-08 13:43:07 | INFO  | Task 51110d75-8664-4632-a283-f454f8320543 is in state STARTED 2025-11-08 13:43:07.086237 | orchestrator | 2025-11-08 13:43:07 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:43:07.086249 | orchestrator | 2025-11-08 13:43:07 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:43:10.121013 | orchestrator | 2025-11-08 13:43:10 | INFO  | Task 
fd495618-ba06-47ef-8b58-4c02aabceb11 is in state STARTED 2025-11-08 13:43:10.121217 | orchestrator | 2025-11-08 13:43:10 | INFO  | Task e48c4929-cd66-4c87-9a18-8d76084f9d9c is in state STARTED 2025-11-08 13:43:10.121835 | orchestrator | 2025-11-08 13:43:10 | INFO  | Task e3f86054-8a50-4dfe-8f63-53da849d4889 is in state STARTED 2025-11-08 13:43:10.123553 | orchestrator | 2025-11-08 13:43:10 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:43:10.127125 | orchestrator | 2025-11-08 13:43:10 | INFO  | Task 669421fe-bdfd-45f7-9b7a-0331b70caefd is in state STARTED 2025-11-08 13:43:10.127151 | orchestrator | 2025-11-08 13:43:10 | INFO  | Task 51110d75-8664-4632-a283-f454f8320543 is in state STARTED 2025-11-08 13:43:10.130345 | orchestrator | 2025-11-08 13:43:10 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:43:10.130370 | orchestrator | 2025-11-08 13:43:10 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:43:13.219893 | orchestrator | 2025-11-08 13:43:13 | INFO  | Task fd495618-ba06-47ef-8b58-4c02aabceb11 is in state STARTED 2025-11-08 13:43:13.219998 | orchestrator | 2025-11-08 13:43:13 | INFO  | Task e48c4929-cd66-4c87-9a18-8d76084f9d9c is in state STARTED 2025-11-08 13:43:13.220014 | orchestrator | 2025-11-08 13:43:13 | INFO  | Task e3f86054-8a50-4dfe-8f63-53da849d4889 is in state STARTED 2025-11-08 13:43:13.220026 | orchestrator | 2025-11-08 13:43:13 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:43:13.220037 | orchestrator | 2025-11-08 13:43:13 | INFO  | Task 669421fe-bdfd-45f7-9b7a-0331b70caefd is in state STARTED 2025-11-08 13:43:13.220048 | orchestrator | 2025-11-08 13:43:13 | INFO  | Task 51110d75-8664-4632-a283-f454f8320543 is in state STARTED 2025-11-08 13:43:13.220059 | orchestrator | 2025-11-08 13:43:13 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:43:13.220070 | orchestrator | 2025-11-08 13:43:13 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:43:16.243284 | orchestrator | 2025-11-08 13:43:16 | INFO  | Task fd495618-ba06-47ef-8b58-4c02aabceb11 is in state STARTED 2025-11-08 13:43:16.243950 | orchestrator | 2025-11-08 13:43:16 | INFO  | Task e48c4929-cd66-4c87-9a18-8d76084f9d9c is in state STARTED 2025-11-08 13:43:16.244342 | orchestrator | 2025-11-08 13:43:16 | INFO  | Task e3f86054-8a50-4dfe-8f63-53da849d4889 is in state STARTED 2025-11-08 13:43:16.247105 | orchestrator | 2025-11-08 13:43:16 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:43:16.249648 | orchestrator | 2025-11-08 13:43:16 | INFO  | Task 669421fe-bdfd-45f7-9b7a-0331b70caefd is in state STARTED 2025-11-08 13:43:16.250145 | orchestrator | 2025-11-08 13:43:16 | INFO  | Task 51110d75-8664-4632-a283-f454f8320543 is in state STARTED 2025-11-08 13:43:16.251479 | orchestrator | 2025-11-08 13:43:16 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:43:16.255069 | orchestrator | 2025-11-08 13:43:16 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:43:19.296613 | orchestrator | 2025-11-08 13:43:19 | INFO  | Task fd495618-ba06-47ef-8b58-4c02aabceb11 is in state STARTED 2025-11-08 13:43:19.296757 | orchestrator | 2025-11-08 13:43:19 | INFO  | Task e48c4929-cd66-4c87-9a18-8d76084f9d9c is in state STARTED 2025-11-08 13:43:19.296771 | orchestrator | 2025-11-08 13:43:19 | INFO  | Task e3f86054-8a50-4dfe-8f63-53da849d4889 is in state STARTED 2025-11-08 13:43:19.296778 | 
orchestrator | 2025-11-08 13:43:19 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:43:19.299415 | orchestrator | 2025-11-08 13:43:19 | INFO  | Task 669421fe-bdfd-45f7-9b7a-0331b70caefd is in state STARTED 2025-11-08 13:43:19.299461 | orchestrator | 2025-11-08 13:43:19 | INFO  | Task 51110d75-8664-4632-a283-f454f8320543 is in state STARTED 2025-11-08 13:43:19.299468 | orchestrator | 2025-11-08 13:43:19 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:43:19.299473 | orchestrator | 2025-11-08 13:43:19 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:43:22.461646 | orchestrator | 2025-11-08 13:43:22 | INFO  | Task fd495618-ba06-47ef-8b58-4c02aabceb11 is in state STARTED 2025-11-08 13:43:22.461794 | orchestrator | 2025-11-08 13:43:22 | INFO  | Task e48c4929-cd66-4c87-9a18-8d76084f9d9c is in state STARTED 2025-11-08 13:43:22.461838 | orchestrator | 2025-11-08 13:43:22 | INFO  | Task e3f86054-8a50-4dfe-8f63-53da849d4889 is in state STARTED 2025-11-08 13:43:22.461850 | orchestrator | 2025-11-08 13:43:22 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:43:22.461861 | orchestrator | 2025-11-08 13:43:22 | INFO  | Task 669421fe-bdfd-45f7-9b7a-0331b70caefd is in state STARTED 2025-11-08 13:43:22.461872 | orchestrator | 2025-11-08 13:43:22 | INFO  | Task 51110d75-8664-4632-a283-f454f8320543 is in state STARTED 2025-11-08 13:43:22.461883 | orchestrator | 2025-11-08 13:43:22 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:43:22.461894 | orchestrator | 2025-11-08 13:43:22 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:43:25.577725 | orchestrator | 2025-11-08 13:43:25.577858 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-11-08 13:43:25.577876 | orchestrator | 2025-11-08 13:43:25.577888 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2025-11-08 13:43:25.577900 | orchestrator | Saturday 08 November 2025 13:43:10 +0000 (0:00:00.937) 0:00:00.937 ***** 2025-11-08 13:43:25.577912 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:43:25.577924 | orchestrator | changed: [testbed-manager] 2025-11-08 13:43:25.577934 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:43:25.577945 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:43:25.577956 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:43:25.577966 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:43:25.577977 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:43:25.577988 | orchestrator | 2025-11-08 13:43:25.577999 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] 
******** 2025-11-08 13:43:25.578010 | orchestrator | Saturday 08 November 2025 13:43:15 +0000 (0:00:04.929) 0:00:05.867 ***** 2025-11-08 13:43:25.578077 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-11-08 13:43:25.578090 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-11-08 13:43:25.578100 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-11-08 13:43:25.578112 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-11-08 13:43:25.578123 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-11-08 13:43:25.578134 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-11-08 13:43:25.578145 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-11-08 13:43:25.578156 | orchestrator | 2025-11-08 13:43:25.578169 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2025-11-08 13:43:25.578190 | orchestrator | Saturday 08 November 2025 13:43:16 +0000 (0:00:01.100) 0:00:06.968 ***** 2025-11-08 13:43:25.578208 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-11-08 13:43:16.535297', 'end': '2025-11-08 13:43:16.543393', 'delta': '0:00:00.008096', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-11-08 13:43:25.578241 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-11-08 13:43:16.498873', 'end': '2025-11-08 13:43:16.507467', 'delta': '0:00:00.008594', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-11-08 13:43:25.578279 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-11-08 13:43:16.605453', 'end': '2025-11-08 13:43:16.609210', 'delta': '0:00:00.003757', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 
'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-11-08 13:43:25.578322 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-11-08 13:43:16.533160', 'end': '2025-11-08 13:43:16.542075', 'delta': '0:00:00.008915', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-11-08 13:43:25.578335 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-11-08 13:43:16.525038', 'end': '2025-11-08 13:43:16.532579', 'delta': '0:00:00.007541', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-11-08 13:43:25.578347 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-11-08 13:43:16.541041', 'end': '2025-11-08 13:43:16.549791', 'delta': '0:00:00.008750', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-11-08 13:43:25.578372 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-11-08 13:43:16.531392', 'end': '2025-11-08 13:43:16.540398', 'delta': '0:00:00.009006', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-11-08 13:43:25.578397 | orchestrator | 2025-11-08 13:43:25.578409 
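The dotfiles play running here follows a small, idempotent recipe: clone the repository, remove any plain file that a link is about to replace, make sure parent folders exist, and symlink each dotfile into the home directory. A rough Python equivalent of the per-file linking steps, with illustrative paths (the repository location is an assumption, not taken from the role's configuration):

```python
from pathlib import Path


def link_dotfile(repo: Path, name: str, home: Path = Path.home()) -> None:
    """Replace a plain dotfile with a symlink into the cloned repository."""
    source = repo / name
    target = home / name
    target.parent.mkdir(parents=True, exist_ok=True)   # ensure parent folders exist
    if target.exists() and not target.is_symlink():    # remove existing plain file
        target.unlink()
    if not target.is_symlink() or target.resolve() != source.resolve():
        if target.is_symlink():
            target.unlink()
        target.symlink_to(source)                       # link dotfile into home folder


# Example (paths illustrative): link_dotfile(Path("/home/dragon/dotfiles"), ".tmux.conf")
```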
| orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2025-11-08 13:43:25.578420 | orchestrator | Saturday 08 November 2025 13:43:19 +0000 (0:00:02.267) 0:00:09.236 ***** 2025-11-08 13:43:25.578430 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-11-08 13:43:25.578441 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-11-08 13:43:25.578452 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-11-08 13:43:25.578462 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-11-08 13:43:25.578473 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-11-08 13:43:25.578484 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-11-08 13:43:25.578494 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-11-08 13:43:25.578505 | orchestrator | 2025-11-08 13:43:25.578516 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2025-11-08 13:43:25.578526 | orchestrator | Saturday 08 November 2025 13:43:21 +0000 (0:00:02.357) 0:00:11.594 ***** 2025-11-08 13:43:25.578537 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-11-08 13:43:25.578548 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-11-08 13:43:25.578559 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-11-08 13:43:25.578569 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-11-08 13:43:25.578580 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-11-08 13:43:25.578591 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-11-08 13:43:25.578602 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-11-08 13:43:25.578616 | orchestrator | 2025-11-08 13:43:25.578634 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 13:43:25.578663 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 13:43:25.578683 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 13:43:25.578737 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 13:43:25.578758 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 13:43:25.578777 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 13:43:25.578795 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 13:43:25.578808 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 13:43:25.578819 | orchestrator | 2025-11-08 13:43:25.578829 | orchestrator | 2025-11-08 13:43:25.578840 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 13:43:25.578851 | orchestrator | Saturday 08 November 2025 13:43:24 +0000 (0:00:02.670) 0:00:14.264 ***** 2025-11-08 13:43:25.578863 | orchestrator | =============================================================================== 2025-11-08 13:43:25.578892 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.93s 2025-11-08 13:43:25.578903 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. 
------------------ 2.67s 2025-11-08 13:43:25.578914 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.36s 2025-11-08 13:43:25.578925 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.27s 2025-11-08 13:43:25.578935 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.10s 2025-11-08 13:43:25.608962 | orchestrator | 2025-11-08 13:43:25 | INFO  | Task fd495618-ba06-47ef-8b58-4c02aabceb11 is in state STARTED 2025-11-08 13:43:25.609048 | orchestrator | 2025-11-08 13:43:25 | INFO  | Task e48c4929-cd66-4c87-9a18-8d76084f9d9c is in state SUCCESS 2025-11-08 13:43:25.609061 | orchestrator | 2025-11-08 13:43:25 | INFO  | Task e3f86054-8a50-4dfe-8f63-53da849d4889 is in state STARTED 2025-11-08 13:43:25.609072 | orchestrator | 2025-11-08 13:43:25 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:43:25.609101 | orchestrator | 2025-11-08 13:43:25 | INFO  | Task 669421fe-bdfd-45f7-9b7a-0331b70caefd is in state STARTED 2025-11-08 13:43:25.609112 | orchestrator | 2025-11-08 13:43:25 | INFO  | Task 51110d75-8664-4632-a283-f454f8320543 is in state STARTED 2025-11-08 13:43:25.609123 | orchestrator | 2025-11-08 13:43:25 | INFO  | Task 3b5a9628-a2ff-4059-aefd-5db08b46ce1c is in state STARTED 2025-11-08 13:43:25.609133 | orchestrator | 2025-11-08 13:43:25 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:43:25.609145 | orchestrator | 2025-11-08 13:43:25 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:43:28.572671 | orchestrator | 2025-11-08 13:43:28 | INFO  | Task fd495618-ba06-47ef-8b58-4c02aabceb11 is in state STARTED 2025-11-08 13:43:28.573921 | orchestrator | 2025-11-08 13:43:28 | INFO  | Task e3f86054-8a50-4dfe-8f63-53da849d4889 is in state STARTED 2025-11-08 13:43:28.574598 | orchestrator | 2025-11-08 13:43:28 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:43:28.577309 | orchestrator | 2025-11-08 13:43:28 | INFO  | Task 669421fe-bdfd-45f7-9b7a-0331b70caefd is in state STARTED 2025-11-08 13:43:28.577333 | orchestrator | 2025-11-08 13:43:28 | INFO  | Task 51110d75-8664-4632-a283-f454f8320543 is in state STARTED 2025-11-08 13:43:28.577344 | orchestrator | 2025-11-08 13:43:28 | INFO  | Task 3b5a9628-a2ff-4059-aefd-5db08b46ce1c is in state STARTED 2025-11-08 13:43:28.577610 | orchestrator | 2025-11-08 13:43:28 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:43:28.577832 | orchestrator | 2025-11-08 13:43:28 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:43:31.682668 | orchestrator | 2025-11-08 13:43:31 | INFO  | Task fd495618-ba06-47ef-8b58-4c02aabceb11 is in state STARTED 2025-11-08 13:43:31.682832 | orchestrator | 2025-11-08 13:43:31 | INFO  | Task e3f86054-8a50-4dfe-8f63-53da849d4889 is in state STARTED 2025-11-08 13:43:31.682847 | orchestrator | 2025-11-08 13:43:31 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:43:31.682859 | orchestrator | 2025-11-08 13:43:31 | INFO  | Task 669421fe-bdfd-45f7-9b7a-0331b70caefd is in state STARTED 2025-11-08 13:43:31.682870 | orchestrator | 2025-11-08 13:43:31 | INFO  | Task 51110d75-8664-4632-a283-f454f8320543 is in state STARTED 2025-11-08 13:43:31.682880 | orchestrator | 2025-11-08 13:43:31 | INFO  | Task 3b5a9628-a2ff-4059-aefd-5db08b46ce1c is in state STARTED 2025-11-08 13:43:31.682891 | orchestrator | 2025-11-08 
13:43:31 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:43:31.682925 | orchestrator | 2025-11-08 13:43:31 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:43:34.700560 | orchestrator | 2025-11-08 13:43:34 | INFO  | Task fd495618-ba06-47ef-8b58-4c02aabceb11 is in state STARTED 2025-11-08 13:43:34.700682 | orchestrator | 2025-11-08 13:43:34 | INFO  | Task e3f86054-8a50-4dfe-8f63-53da849d4889 is in state STARTED 2025-11-08 13:43:34.701137 | orchestrator | 2025-11-08 13:43:34 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:43:34.701797 | orchestrator | 2025-11-08 13:43:34 | INFO  | Task 669421fe-bdfd-45f7-9b7a-0331b70caefd is in state STARTED 2025-11-08 13:43:34.702146 | orchestrator | 2025-11-08 13:43:34 | INFO  | Task 51110d75-8664-4632-a283-f454f8320543 is in state STARTED 2025-11-08 13:43:34.702789 | orchestrator | 2025-11-08 13:43:34 | INFO  | Task 3b5a9628-a2ff-4059-aefd-5db08b46ce1c is in state STARTED 2025-11-08 13:43:34.703361 | orchestrator | 2025-11-08 13:43:34 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:43:34.703454 | orchestrator | 2025-11-08 13:43:34 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:43:37.859053 | orchestrator | 2025-11-08 13:43:37 | INFO  | Task fd495618-ba06-47ef-8b58-4c02aabceb11 is in state STARTED 2025-11-08 13:43:37.860732 | orchestrator | 2025-11-08 13:43:37 | INFO  | Task e3f86054-8a50-4dfe-8f63-53da849d4889 is in state STARTED 2025-11-08 13:43:37.860803 | orchestrator | 2025-11-08 13:43:37 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:43:37.860812 | orchestrator | 2025-11-08 13:43:37 | INFO  | Task 669421fe-bdfd-45f7-9b7a-0331b70caefd is in state STARTED 2025-11-08 13:43:37.862536 | orchestrator | 2025-11-08 13:43:37 | INFO  | Task 51110d75-8664-4632-a283-f454f8320543 is in state STARTED 2025-11-08 13:43:37.862562 | orchestrator | 2025-11-08 13:43:37 | INFO  | Task 3b5a9628-a2ff-4059-aefd-5db08b46ce1c is in state STARTED 2025-11-08 13:43:37.862569 | orchestrator | 2025-11-08 13:43:37 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:43:37.862578 | orchestrator | 2025-11-08 13:43:37 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:43:40.888616 | orchestrator | 2025-11-08 13:43:40 | INFO  | Task fd495618-ba06-47ef-8b58-4c02aabceb11 is in state STARTED 2025-11-08 13:43:40.889057 | orchestrator | 2025-11-08 13:43:40 | INFO  | Task e3f86054-8a50-4dfe-8f63-53da849d4889 is in state STARTED 2025-11-08 13:43:40.891157 | orchestrator | 2025-11-08 13:43:40 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:43:40.892208 | orchestrator | 2025-11-08 13:43:40 | INFO  | Task 669421fe-bdfd-45f7-9b7a-0331b70caefd is in state STARTED 2025-11-08 13:43:40.893496 | orchestrator | 2025-11-08 13:43:40 | INFO  | Task 51110d75-8664-4632-a283-f454f8320543 is in state STARTED 2025-11-08 13:43:40.894897 | orchestrator | 2025-11-08 13:43:40 | INFO  | Task 3b5a9628-a2ff-4059-aefd-5db08b46ce1c is in state STARTED 2025-11-08 13:43:40.895746 | orchestrator | 2025-11-08 13:43:40 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:43:40.895776 | orchestrator | 2025-11-08 13:43:40 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:43:44.185252 | orchestrator | 2025-11-08 13:43:44 | INFO  | Task fd495618-ba06-47ef-8b58-4c02aabceb11 is in state STARTED 2025-11-08 13:43:44.185379 | 
orchestrator | 2025-11-08 13:43:44 | INFO  | Task e3f86054-8a50-4dfe-8f63-53da849d4889 is in state STARTED 2025-11-08 13:43:44.185406 | orchestrator | 2025-11-08 13:43:44 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:43:44.185465 | orchestrator | 2025-11-08 13:43:44 | INFO  | Task 669421fe-bdfd-45f7-9b7a-0331b70caefd is in state STARTED 2025-11-08 13:43:44.185485 | orchestrator | 2025-11-08 13:43:44 | INFO  | Task 51110d75-8664-4632-a283-f454f8320543 is in state STARTED 2025-11-08 13:43:44.185503 | orchestrator | 2025-11-08 13:43:44 | INFO  | Task 3b5a9628-a2ff-4059-aefd-5db08b46ce1c is in state STARTED 2025-11-08 13:43:44.185962 | orchestrator | 2025-11-08 13:43:44 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:43:44.185993 | orchestrator | 2025-11-08 13:43:44 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:43:47.117784 | orchestrator | 2025-11-08 13:43:47 | INFO  | Task fd495618-ba06-47ef-8b58-4c02aabceb11 is in state STARTED 2025-11-08 13:43:47.122432 | orchestrator | 2025-11-08 13:43:47 | INFO  | Task e3f86054-8a50-4dfe-8f63-53da849d4889 is in state STARTED 2025-11-08 13:43:47.122491 | orchestrator | 2025-11-08 13:43:47 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:43:47.122504 | orchestrator | 2025-11-08 13:43:47 | INFO  | Task 669421fe-bdfd-45f7-9b7a-0331b70caefd is in state STARTED 2025-11-08 13:43:47.124797 | orchestrator | 2025-11-08 13:43:47 | INFO  | Task 51110d75-8664-4632-a283-f454f8320543 is in state SUCCESS 2025-11-08 13:43:47.126117 | orchestrator | 2025-11-08 13:43:47 | INFO  | Task 3b5a9628-a2ff-4059-aefd-5db08b46ce1c is in state STARTED 2025-11-08 13:43:47.127336 | orchestrator | 2025-11-08 13:43:47 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:43:47.127361 | orchestrator | 2025-11-08 13:43:47 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:43:50.192756 | orchestrator | 2025-11-08 13:43:50 | INFO  | Task fd495618-ba06-47ef-8b58-4c02aabceb11 is in state STARTED 2025-11-08 13:43:50.194114 | orchestrator | 2025-11-08 13:43:50 | INFO  | Task e3f86054-8a50-4dfe-8f63-53da849d4889 is in state STARTED 2025-11-08 13:43:50.194542 | orchestrator | 2025-11-08 13:43:50 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:43:50.195191 | orchestrator | 2025-11-08 13:43:50 | INFO  | Task 669421fe-bdfd-45f7-9b7a-0331b70caefd is in state STARTED 2025-11-08 13:43:50.195894 | orchestrator | 2025-11-08 13:43:50 | INFO  | Task 3b5a9628-a2ff-4059-aefd-5db08b46ce1c is in state STARTED 2025-11-08 13:43:50.196464 | orchestrator | 2025-11-08 13:43:50 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:43:50.196496 | orchestrator | 2025-11-08 13:43:50 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:43:53.258943 | orchestrator | 2025-11-08 13:43:53 | INFO  | Task fd495618-ba06-47ef-8b58-4c02aabceb11 is in state STARTED 2025-11-08 13:43:53.264234 | orchestrator | 2025-11-08 13:43:53 | INFO  | Task e3f86054-8a50-4dfe-8f63-53da849d4889 is in state STARTED 2025-11-08 13:43:53.264633 | orchestrator | 2025-11-08 13:43:53 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:43:53.265383 | orchestrator | 2025-11-08 13:43:53 | INFO  | Task 669421fe-bdfd-45f7-9b7a-0331b70caefd is in state STARTED 2025-11-08 13:43:53.266485 | orchestrator | 2025-11-08 13:43:53 | INFO  | Task 3b5a9628-a2ff-4059-aefd-5db08b46ce1c is 
in state STARTED 2025-11-08 13:43:53.267028 | orchestrator | 2025-11-08 13:43:53 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:43:53.267065 | orchestrator | 2025-11-08 13:43:53 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:43:56.351521 | orchestrator | 2025-11-08 13:43:56 | INFO  | Task fd495618-ba06-47ef-8b58-4c02aabceb11 is in state STARTED 2025-11-08 13:43:56.351682 | orchestrator | 2025-11-08 13:43:56 | INFO  | Task e3f86054-8a50-4dfe-8f63-53da849d4889 is in state STARTED 2025-11-08 13:43:56.351904 | orchestrator | 2025-11-08 13:43:56 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:43:56.354457 | orchestrator | 2025-11-08 13:43:56 | INFO  | Task 669421fe-bdfd-45f7-9b7a-0331b70caefd is in state STARTED 2025-11-08 13:43:56.358385 | orchestrator | 2025-11-08 13:43:56 | INFO  | Task 3b5a9628-a2ff-4059-aefd-5db08b46ce1c is in state STARTED 2025-11-08 13:43:56.360638 | orchestrator | 2025-11-08 13:43:56 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:43:56.360661 | orchestrator | 2025-11-08 13:43:56 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:43:59.492538 | orchestrator | 2025-11-08 13:43:59 | INFO  | Task fd495618-ba06-47ef-8b58-4c02aabceb11 is in state STARTED 2025-11-08 13:43:59.492786 | orchestrator | 2025-11-08 13:43:59 | INFO  | Task e3f86054-8a50-4dfe-8f63-53da849d4889 is in state SUCCESS 2025-11-08 13:43:59.492806 | orchestrator | 2025-11-08 13:43:59 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:43:59.492818 | orchestrator | 2025-11-08 13:43:59 | INFO  | Task 669421fe-bdfd-45f7-9b7a-0331b70caefd is in state STARTED 2025-11-08 13:43:59.492830 | orchestrator | 2025-11-08 13:43:59 | INFO  | Task 3b5a9628-a2ff-4059-aefd-5db08b46ce1c is in state STARTED 2025-11-08 13:43:59.492855 | orchestrator | 2025-11-08 13:43:59 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:43:59.492867 | orchestrator | 2025-11-08 13:43:59 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:44:02.572496 | orchestrator | 2025-11-08 13:44:02 | INFO  | Task fd495618-ba06-47ef-8b58-4c02aabceb11 is in state STARTED 2025-11-08 13:44:02.573939 | orchestrator | 2025-11-08 13:44:02 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:44:02.576820 | orchestrator | 2025-11-08 13:44:02 | INFO  | Task 669421fe-bdfd-45f7-9b7a-0331b70caefd is in state STARTED 2025-11-08 13:44:02.580350 | orchestrator | 2025-11-08 13:44:02 | INFO  | Task 3b5a9628-a2ff-4059-aefd-5db08b46ce1c is in state STARTED 2025-11-08 13:44:02.582806 | orchestrator | 2025-11-08 13:44:02 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:44:02.582828 | orchestrator | 2025-11-08 13:44:02 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:44:05.656941 | orchestrator | 2025-11-08 13:44:05 | INFO  | Task fd495618-ba06-47ef-8b58-4c02aabceb11 is in state STARTED 2025-11-08 13:44:05.666945 | orchestrator | 2025-11-08 13:44:05 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:44:05.685029 | orchestrator | 2025-11-08 13:44:05 | INFO  | Task 669421fe-bdfd-45f7-9b7a-0331b70caefd is in state STARTED 2025-11-08 13:44:05.687919 | orchestrator | 2025-11-08 13:44:05 | INFO  | Task 3b5a9628-a2ff-4059-aefd-5db08b46ce1c is in state STARTED 2025-11-08 13:44:05.692749 | orchestrator | 2025-11-08 13:44:05 | INFO  | Task 
21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:44:05.693141 | orchestrator | 2025-11-08 13:44:05 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:44:08.769749 | orchestrator | 2025-11-08 13:44:08 | INFO  | Task fd495618-ba06-47ef-8b58-4c02aabceb11 is in state STARTED 2025-11-08 13:44:08.774463 | orchestrator | 2025-11-08 13:44:08 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:44:08.774514 | orchestrator | 2025-11-08 13:44:08 | INFO  | Task 669421fe-bdfd-45f7-9b7a-0331b70caefd is in state STARTED 2025-11-08 13:44:08.774554 | orchestrator | 2025-11-08 13:44:08 | INFO  | Task 3b5a9628-a2ff-4059-aefd-5db08b46ce1c is in state STARTED 2025-11-08 13:44:08.774566 | orchestrator | 2025-11-08 13:44:08 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:44:08.774577 | orchestrator | 2025-11-08 13:44:08 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:44:11.991103 | orchestrator | 2025-11-08 13:44:11 | INFO  | Task fd495618-ba06-47ef-8b58-4c02aabceb11 is in state STARTED 2025-11-08 13:44:11.991212 | orchestrator | 2025-11-08 13:44:11 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:44:11.991802 | orchestrator | 2025-11-08 13:44:11 | INFO  | Task 669421fe-bdfd-45f7-9b7a-0331b70caefd is in state STARTED 2025-11-08 13:44:11.993195 | orchestrator | 2025-11-08 13:44:11 | INFO  | Task 3b5a9628-a2ff-4059-aefd-5db08b46ce1c is in state STARTED 2025-11-08 13:44:11.994741 | orchestrator | 2025-11-08 13:44:11 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:44:11.994765 | orchestrator | 2025-11-08 13:44:11 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:44:15.047830 | orchestrator | 2025-11-08 13:44:15 | INFO  | Task fd495618-ba06-47ef-8b58-4c02aabceb11 is in state STARTED 2025-11-08 13:44:15.047949 | orchestrator | 2025-11-08 13:44:15 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:44:15.065671 | orchestrator | 2025-11-08 13:44:15 | INFO  | Task 669421fe-bdfd-45f7-9b7a-0331b70caefd is in state STARTED 2025-11-08 13:44:15.067862 | orchestrator | 2025-11-08 13:44:15 | INFO  | Task 3b5a9628-a2ff-4059-aefd-5db08b46ce1c is in state STARTED 2025-11-08 13:44:15.071351 | orchestrator | 2025-11-08 13:44:15 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:44:15.071396 | orchestrator | 2025-11-08 13:44:15 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:44:18.114903 | orchestrator | 2025-11-08 13:44:18 | INFO  | Task fd495618-ba06-47ef-8b58-4c02aabceb11 is in state STARTED 2025-11-08 13:44:18.122302 | orchestrator | 2025-11-08 13:44:18 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:44:18.127564 | orchestrator | 2025-11-08 13:44:18 | INFO  | Task 669421fe-bdfd-45f7-9b7a-0331b70caefd is in state STARTED 2025-11-08 13:44:18.128685 | orchestrator | 2025-11-08 13:44:18 | INFO  | Task 3b5a9628-a2ff-4059-aefd-5db08b46ce1c is in state STARTED 2025-11-08 13:44:18.129801 | orchestrator | 2025-11-08 13:44:18 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:44:18.129834 | orchestrator | 2025-11-08 13:44:18 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:44:21.186623 | orchestrator | 2025-11-08 13:44:21 | INFO  | Task fd495618-ba06-47ef-8b58-4c02aabceb11 is in state STARTED 2025-11-08 13:44:21.190938 | orchestrator | 2025-11-08 13:44:21 | INFO  | Task 
8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:44:21.191500 | orchestrator | 2025-11-08 13:44:21 | INFO  | Task 669421fe-bdfd-45f7-9b7a-0331b70caefd is in state SUCCESS 2025-11-08 13:44:21.193033 | orchestrator | 2025-11-08 13:44:21.193071 | orchestrator | 2025-11-08 13:44:21.193079 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-11-08 13:44:21.193086 | orchestrator | 2025-11-08 13:44:21.193093 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-11-08 13:44:21.193101 | orchestrator | Saturday 08 November 2025 13:43:12 +0000 (0:00:00.529) 0:00:00.529 ***** 2025-11-08 13:44:21.193108 | orchestrator | ok: [testbed-manager] => { 2025-11-08 13:44:21.193117 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 2025-11-08 13:44:21.193143 | orchestrator | } 2025-11-08 13:44:21.193150 | orchestrator | 2025-11-08 13:44:21.193157 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-11-08 13:44:21.193164 | orchestrator | Saturday 08 November 2025 13:43:12 +0000 (0:00:00.457) 0:00:00.986 ***** 2025-11-08 13:44:21.193170 | orchestrator | ok: [testbed-manager] 2025-11-08 13:44:21.193178 | orchestrator | 2025-11-08 13:44:21.193184 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-11-08 13:44:21.193191 | orchestrator | Saturday 08 November 2025 13:43:13 +0000 (0:00:01.303) 0:00:02.290 ***** 2025-11-08 13:44:21.193198 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-11-08 13:44:21.193205 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-11-08 13:44:21.193212 | orchestrator | 2025-11-08 13:44:21.193218 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-11-08 13:44:21.193224 | orchestrator | Saturday 08 November 2025 13:43:15 +0000 (0:00:01.096) 0:00:03.387 ***** 2025-11-08 13:44:21.193230 | orchestrator | changed: [testbed-manager] 2025-11-08 13:44:21.193236 | orchestrator | 2025-11-08 13:44:21.193243 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-11-08 13:44:21.193249 | orchestrator | Saturday 08 November 2025 13:43:17 +0000 (0:00:02.084) 0:00:05.472 ***** 2025-11-08 13:44:21.193255 | orchestrator | changed: [testbed-manager] 2025-11-08 13:44:21.193262 | orchestrator | 2025-11-08 13:44:21.193268 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-11-08 13:44:21.193275 | orchestrator | Saturday 08 November 2025 13:43:18 +0000 (0:00:01.757) 0:00:07.229 ***** 2025-11-08 13:44:21.193281 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
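The ok result that follows shows the retry paying off: the task keeps re-checking the service until Docker reports it up, for at most ten attempts. Below is a hedged sketch of that retry-until-healthy pattern; the container name homer, the health check via `docker inspect`, and the delay are illustrative assumptions, not necessarily what the role actually runs.

```python
import subprocess
import time


def wait_until_healthy(container: str = "homer",
                       retries: int = 10,
                       delay: float = 5.0) -> bool:
    """Re-check a container's health status until it is healthy or retries run out."""
    for attempt in range(retries):
        result = subprocess.run(
            ["docker", "inspect", "--format", "{{.State.Health.Status}}", container],
            capture_output=True, text=True,
        )
        if result.returncode == 0 and result.stdout.strip() == "healthy":
            return True
        print(f"FAILED - RETRYING: {container} ({retries - attempt - 1} retries left).")
        time.sleep(delay)
    return False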
2025-11-08 13:44:21.193287 | orchestrator | ok: [testbed-manager] 2025-11-08 13:44:21.193294 | orchestrator | 2025-11-08 13:44:21.193300 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-11-08 13:44:21.193306 | orchestrator | Saturday 08 November 2025 13:43:43 +0000 (0:00:24.926) 0:00:32.156 ***** 2025-11-08 13:44:21.193313 | orchestrator | changed: [testbed-manager] 2025-11-08 13:44:21.193319 | orchestrator | 2025-11-08 13:44:21.193326 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 13:44:21.193333 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 13:44:21.193341 | orchestrator | 2025-11-08 13:44:21.193347 | orchestrator | 2025-11-08 13:44:21.193354 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 13:44:21.193361 | orchestrator | Saturday 08 November 2025 13:43:46 +0000 (0:00:02.307) 0:00:34.463 ***** 2025-11-08 13:44:21.193367 | orchestrator | =============================================================================== 2025-11-08 13:44:21.193374 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 24.93s 2025-11-08 13:44:21.193381 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.31s 2025-11-08 13:44:21.193387 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.08s 2025-11-08 13:44:21.193394 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.76s 2025-11-08 13:44:21.193401 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.30s 2025-11-08 13:44:21.193407 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.10s 2025-11-08 13:44:21.193414 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.46s 2025-11-08 13:44:21.193420 | orchestrator | 2025-11-08 13:44:21.193427 | orchestrator | 2025-11-08 13:44:21.193434 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-11-08 13:44:21.193464 | orchestrator | 2025-11-08 13:44:21.193471 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-11-08 13:44:21.193479 | orchestrator | Saturday 08 November 2025 13:43:13 +0000 (0:00:00.590) 0:00:00.590 ***** 2025-11-08 13:44:21.193494 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-11-08 13:44:21.193504 | orchestrator | 2025-11-08 13:44:21.193510 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-11-08 13:44:21.193518 | orchestrator | Saturday 08 November 2025 13:43:13 +0000 (0:00:00.456) 0:00:01.047 ***** 2025-11-08 13:44:21.193525 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-11-08 13:44:21.193532 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-11-08 13:44:21.193538 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-11-08 13:44:21.193545 | orchestrator | 2025-11-08 13:44:21.193552 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-11-08 
13:44:21.193559 | orchestrator | Saturday 08 November 2025 13:43:14 +0000 (0:00:01.503) 0:00:02.550 ***** 2025-11-08 13:44:21.193566 | orchestrator | changed: [testbed-manager] 2025-11-08 13:44:21.193573 | orchestrator | 2025-11-08 13:44:21.193580 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-11-08 13:44:21.193587 | orchestrator | Saturday 08 November 2025 13:43:17 +0000 (0:00:02.355) 0:00:04.905 ***** 2025-11-08 13:44:21.193611 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-11-08 13:44:21.193620 | orchestrator | ok: [testbed-manager] 2025-11-08 13:44:21.193628 | orchestrator | 2025-11-08 13:44:21.193635 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-11-08 13:44:21.193643 | orchestrator | Saturday 08 November 2025 13:43:49 +0000 (0:00:32.439) 0:00:37.345 ***** 2025-11-08 13:44:21.193651 | orchestrator | changed: [testbed-manager] 2025-11-08 13:44:21.193658 | orchestrator | 2025-11-08 13:44:21.193666 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-11-08 13:44:21.193673 | orchestrator | Saturday 08 November 2025 13:43:50 +0000 (0:00:00.838) 0:00:38.184 ***** 2025-11-08 13:44:21.193681 | orchestrator | ok: [testbed-manager] 2025-11-08 13:44:21.193689 | orchestrator | 2025-11-08 13:44:21.193716 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-11-08 13:44:21.193723 | orchestrator | Saturday 08 November 2025 13:43:51 +0000 (0:00:01.137) 0:00:39.321 ***** 2025-11-08 13:44:21.193730 | orchestrator | changed: [testbed-manager] 2025-11-08 13:44:21.193738 | orchestrator | 2025-11-08 13:44:21.193745 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-11-08 13:44:21.193752 | orchestrator | Saturday 08 November 2025 13:43:54 +0000 (0:00:02.359) 0:00:41.681 ***** 2025-11-08 13:44:21.193759 | orchestrator | changed: [testbed-manager] 2025-11-08 13:44:21.193767 | orchestrator | 2025-11-08 13:44:21.193774 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-11-08 13:44:21.193780 | orchestrator | Saturday 08 November 2025 13:43:55 +0000 (0:00:01.352) 0:00:43.034 ***** 2025-11-08 13:44:21.193787 | orchestrator | changed: [testbed-manager] 2025-11-08 13:44:21.193793 | orchestrator | 2025-11-08 13:44:21.193800 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-11-08 13:44:21.193805 | orchestrator | Saturday 08 November 2025 13:43:55 +0000 (0:00:00.473) 0:00:43.507 ***** 2025-11-08 13:44:21.193811 | orchestrator | ok: [testbed-manager] 2025-11-08 13:44:21.193817 | orchestrator | 2025-11-08 13:44:21.193823 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 13:44:21.193829 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 13:44:21.193835 | orchestrator | 2025-11-08 13:44:21.193840 | orchestrator | 2025-11-08 13:44:21.193846 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 13:44:21.193853 | orchestrator | Saturday 08 November 2025 13:43:56 +0000 (0:00:00.316) 0:00:43.823 ***** 2025-11-08 13:44:21.193859 | orchestrator | 
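Besides starting the service, the play above drops an "openstack wrapper script" on the manager. Conceptually such a wrapper just forwards its arguments into the client container so the host needs no OpenStack client installed. A minimal sketch under that assumption follows; the container name openstackclient and the use of `docker exec` are illustrative, not necessarily what the role installs.

```python
#!/usr/bin/env python3
# Forward "openstack ..." invocations into the openstackclient container.
import subprocess
import sys


def main() -> int:
    cmd = ["docker", "exec", "-i", "openstackclient", "openstack", *sys.argv[1:]]
    return subprocess.call(cmd)


if __name__ == "__main__":
    sys.exit(main())
```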
=============================================================================== 2025-11-08 13:44:21.193871 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 32.44s 2025-11-08 13:44:21.193877 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.36s 2025-11-08 13:44:21.193884 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.36s 2025-11-08 13:44:21.193890 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.50s 2025-11-08 13:44:21.193896 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.35s 2025-11-08 13:44:21.193903 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.14s 2025-11-08 13:44:21.193909 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.84s 2025-11-08 13:44:21.193915 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.47s 2025-11-08 13:44:21.193922 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.46s 2025-11-08 13:44:21.193928 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.32s 2025-11-08 13:44:21.193934 | orchestrator | 2025-11-08 13:44:21.193940 | orchestrator | 2025-11-08 13:44:21.193947 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-08 13:44:21.193953 | orchestrator | 2025-11-08 13:44:21.193959 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-08 13:44:21.193966 | orchestrator | Saturday 08 November 2025 13:43:10 +0000 (0:00:00.676) 0:00:00.676 ***** 2025-11-08 13:44:21.193972 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-11-08 13:44:21.193978 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-11-08 13:44:21.193985 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-11-08 13:44:21.193991 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-11-08 13:44:21.193997 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-11-08 13:44:21.194004 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-11-08 13:44:21.194010 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-11-08 13:44:21.194069 | orchestrator | 2025-11-08 13:44:21.194077 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-11-08 13:44:21.194084 | orchestrator | 2025-11-08 13:44:21.194091 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-11-08 13:44:21.194098 | orchestrator | Saturday 08 November 2025 13:43:13 +0000 (0:00:02.708) 0:00:03.385 ***** 2025-11-08 13:44:21.194115 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 13:44:21.194127 | orchestrator | 2025-11-08 13:44:21.194135 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-11-08 13:44:21.194142 | orchestrator | Saturday 08 November 2025 13:43:14 +0000 (0:00:01.402) 0:00:04.787 ***** 2025-11-08 
13:44:21.194148 | orchestrator | ok: [testbed-manager] 2025-11-08 13:44:21.194155 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:44:21.194162 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:44:21.194172 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:44:21.194179 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:44:21.194193 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:44:21.194199 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:44:21.194206 | orchestrator | 2025-11-08 13:44:21.194213 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-11-08 13:44:21.194220 | orchestrator | Saturday 08 November 2025 13:43:16 +0000 (0:00:02.198) 0:00:06.985 ***** 2025-11-08 13:44:21.194226 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:44:21.194233 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:44:21.194239 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:44:21.194246 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:44:21.194258 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:44:21.194264 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:44:21.194271 | orchestrator | ok: [testbed-manager] 2025-11-08 13:44:21.194278 | orchestrator | 2025-11-08 13:44:21.194284 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-11-08 13:44:21.194291 | orchestrator | Saturday 08 November 2025 13:43:19 +0000 (0:00:02.819) 0:00:09.805 ***** 2025-11-08 13:44:21.194298 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:44:21.194305 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:44:21.194311 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:44:21.194318 | orchestrator | changed: [testbed-manager] 2025-11-08 13:44:21.194324 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:44:21.194331 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:44:21.194338 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:44:21.194345 | orchestrator | 2025-11-08 13:44:21.194351 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-11-08 13:44:21.194358 | orchestrator | Saturday 08 November 2025 13:43:22 +0000 (0:00:02.803) 0:00:12.609 ***** 2025-11-08 13:44:21.194365 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:44:21.194372 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:44:21.194379 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:44:21.194385 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:44:21.194392 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:44:21.194399 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:44:21.194406 | orchestrator | changed: [testbed-manager] 2025-11-08 13:44:21.194412 | orchestrator | 2025-11-08 13:44:21.194419 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-11-08 13:44:21.194426 | orchestrator | Saturday 08 November 2025 13:43:34 +0000 (0:00:12.246) 0:00:24.855 ***** 2025-11-08 13:44:21.194433 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:44:21.194440 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:44:21.194447 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:44:21.194453 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:44:21.194460 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:44:21.194467 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:44:21.194473 | orchestrator | changed: [testbed-manager] 2025-11-08 13:44:21.194480 | 
orchestrator | 2025-11-08 13:44:21.194487 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-11-08 13:44:21.194494 | orchestrator | Saturday 08 November 2025 13:43:56 +0000 (0:00:22.087) 0:00:46.943 ***** 2025-11-08 13:44:21.194501 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 13:44:21.194510 | orchestrator | 2025-11-08 13:44:21.194516 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-11-08 13:44:21.194523 | orchestrator | Saturday 08 November 2025 13:43:57 +0000 (0:00:01.149) 0:00:48.092 ***** 2025-11-08 13:44:21.194530 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-11-08 13:44:21.194537 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-11-08 13:44:21.194544 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-11-08 13:44:21.194551 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-11-08 13:44:21.194557 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-11-08 13:44:21.194564 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-11-08 13:44:21.194571 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-11-08 13:44:21.194577 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-11-08 13:44:21.194584 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-11-08 13:44:21.194591 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-11-08 13:44:21.194598 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-11-08 13:44:21.194604 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-11-08 13:44:21.194616 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-11-08 13:44:21.194623 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-11-08 13:44:21.194630 | orchestrator | 2025-11-08 13:44:21.194637 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-11-08 13:44:21.194645 | orchestrator | Saturday 08 November 2025 13:44:03 +0000 (0:00:05.630) 0:00:53.722 ***** 2025-11-08 13:44:21.194652 | orchestrator | ok: [testbed-manager] 2025-11-08 13:44:21.194659 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:44:21.194666 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:44:21.194673 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:44:21.194679 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:44:21.194686 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:44:21.194707 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:44:21.194715 | orchestrator | 2025-11-08 13:44:21.194720 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-11-08 13:44:21.194727 | orchestrator | Saturday 08 November 2025 13:44:04 +0000 (0:00:01.520) 0:00:55.243 ***** 2025-11-08 13:44:21.194733 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:44:21.194739 | orchestrator | changed: [testbed-manager] 2025-11-08 13:44:21.194745 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:44:21.194752 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:44:21.194758 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:44:21.194764 | orchestrator | 
changed: [testbed-node-4] 2025-11-08 13:44:21.194771 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:44:21.194777 | orchestrator | 2025-11-08 13:44:21.194784 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-11-08 13:44:21.194797 | orchestrator | Saturday 08 November 2025 13:44:06 +0000 (0:00:01.540) 0:00:56.784 ***** 2025-11-08 13:44:21.194803 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:44:21.194809 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:44:21.194814 | orchestrator | ok: [testbed-manager] 2025-11-08 13:44:21.194820 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:44:21.194826 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:44:21.194831 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:44:21.194837 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:44:21.194844 | orchestrator | 2025-11-08 13:44:21.194851 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-11-08 13:44:21.194857 | orchestrator | Saturday 08 November 2025 13:44:09 +0000 (0:00:03.155) 0:00:59.939 ***** 2025-11-08 13:44:21.194864 | orchestrator | ok: [testbed-manager] 2025-11-08 13:44:21.194870 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:44:21.194877 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:44:21.194883 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:44:21.194889 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:44:21.194896 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:44:21.194903 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:44:21.194910 | orchestrator | 2025-11-08 13:44:21.194917 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-11-08 13:44:21.194924 | orchestrator | Saturday 08 November 2025 13:44:12 +0000 (0:00:02.827) 0:01:02.767 ***** 2025-11-08 13:44:21.194931 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-11-08 13:44:21.194939 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 13:44:21.194946 | orchestrator | 2025-11-08 13:44:21.194953 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-11-08 13:44:21.194959 | orchestrator | Saturday 08 November 2025 13:44:13 +0000 (0:00:01.454) 0:01:04.223 ***** 2025-11-08 13:44:21.194966 | orchestrator | changed: [testbed-manager] 2025-11-08 13:44:21.194972 | orchestrator | 2025-11-08 13:44:21.194979 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-11-08 13:44:21.194990 | orchestrator | Saturday 08 November 2025 13:44:16 +0000 (0:00:02.146) 0:01:06.369 ***** 2025-11-08 13:44:21.194997 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:44:21.195003 | orchestrator | changed: [testbed-manager] 2025-11-08 13:44:21.195010 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:44:21.195016 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:44:21.195022 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:44:21.195029 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:44:21.195035 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:44:21.195042 | orchestrator | 2025-11-08 13:44:21.195048 | orchestrator | PLAY RECAP 
********************************************************************* 2025-11-08 13:44:21.195055 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 13:44:21.195062 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 13:44:21.195068 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 13:44:21.195075 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 13:44:21.195081 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 13:44:21.195088 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 13:44:21.195120 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 13:44:21.195128 | orchestrator | 2025-11-08 13:44:21.195135 | orchestrator | 2025-11-08 13:44:21.195141 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 13:44:21.195148 | orchestrator | Saturday 08 November 2025 13:44:19 +0000 (0:00:03.706) 0:01:10.076 ***** 2025-11-08 13:44:21.195154 | orchestrator | =============================================================================== 2025-11-08 13:44:21.195161 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 22.09s 2025-11-08 13:44:21.195167 | orchestrator | osism.services.netdata : Add repository -------------------------------- 12.25s 2025-11-08 13:44:21.195174 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.63s 2025-11-08 13:44:21.195180 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.71s 2025-11-08 13:44:21.195186 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 3.16s 2025-11-08 13:44:21.195193 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.83s 2025-11-08 13:44:21.195199 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.82s 2025-11-08 13:44:21.195206 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.80s 2025-11-08 13:44:21.195212 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.71s 2025-11-08 13:44:21.195219 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.20s 2025-11-08 13:44:21.195225 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.15s 2025-11-08 13:44:21.195242 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.54s 2025-11-08 13:44:21.195249 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.52s 2025-11-08 13:44:21.195256 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.46s 2025-11-08 13:44:21.195263 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.40s 2025-11-08 13:44:21.195270 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.15s 2025-11-08 13:44:21.199238 | orchestrator | 2025-11-08 13:44:21 | INFO  | Task 
2025-11-08 13:44:21.199238 | orchestrator | 2025-11-08 13:44:21 | INFO  | Task 3b5a9628-a2ff-4059-aefd-5db08b46ce1c is in state STARTED
2025-11-08 13:44:21.200118 | orchestrator | 2025-11-08 13:44:21 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED
2025-11-08 13:44:21.200140 | orchestrator | 2025-11-08 13:44:21 | INFO  | Wait 1 second(s) until the next check
[... identical status checks for tasks fd495618-ba06-47ef-8b58-4c02aabceb11, 8b9196bc-1762-48ca-898a-fa842751b481, 3b5a9628-a2ff-4059-aefd-5db08b46ce1c and 21d1394b-f46a-42f4-9ba4-883cd2343e43 repeated every ~3 seconds ...]
2025-11-08 13:44:48.627372 | orchestrator | 2025-11-08 13:44:48 | INFO  | Task 3b5a9628-a2ff-4059-aefd-5db08b46ce1c is in state SUCCESS
[... the remaining three tasks keep reporting STARTED in the same pattern through 13:45:16 ...]
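The deployment is driven as background tasks on the manager; the client simply polls each task's state every few seconds until it reports SUCCESS, as the INFO lines here show. The same poll-until-done idea can be sketched in plain Ansible with until/retries/delay (the health endpoint below is hypothetical and is not how the osism manager itself tracks its task queue):

    # Illustrative polling pattern, not part of the testbed playbooks.
    - name: Wait until the service answers (check once per second)
      ansible.builtin.uri:
        url: "http://{{ inventory_hostname }}:8080/health"   # hypothetical endpoint
        status_code: 200
      register: health
      until: health.status == 200
      retries: 60   # give up after ~60 checks
      delay: 1      # wait 1 second(s) until the next check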
2025-11-08 13:45:19.112834 | orchestrator | 2025-11-08 13:45:19 | INFO  | Task fd495618-ba06-47ef-8b58-4c02aabceb11 is in state STARTED
2025-11-08 13:45:19.113258 | orchestrator | 2025-11-08 13:45:19 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED
2025-11-08 13:45:19.115320 | orchestrator | 2025-11-08 13:45:19 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED
2025-11-08 13:45:19.115353 | orchestrator | 2025-11-08 13:45:19 | INFO  | Wait 1 second(s) until the next check
2025-11-08 13:45:22.138746 | orchestrator | 2025-11-08 13:45:22 | INFO  | Task fd495618-ba06-47ef-8b58-4c02aabceb11 is in state SUCCESS
2025-11-08 13:45:22.140239 | orchestrator |
2025-11-08 13:45:22.140314 | orchestrator |
2025-11-08 13:45:22.141811 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-11-08 13:45:22.141957 | orchestrator |
2025-11-08 13:45:22.141973 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-11-08 13:45:22.141984 | orchestrator | Saturday 08 November 2025 13:43:29 +0000 (0:00:00.563) 0:00:00.563 *****
2025-11-08 13:45:22.141996 | orchestrator | ok: [testbed-manager]
2025-11-08 13:45:22.142008 | orchestrator |
2025-11-08 13:45:22.142105 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-11-08 13:45:22.142118 | orchestrator | Saturday 08 November 2025 13:43:30 +0000 (0:00:01.109) 0:00:01.672 *****
2025-11-08 13:45:22.142130 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-11-08 13:45:22.142142 | orchestrator |
2025-11-08 13:45:22.142153 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-11-08 13:45:22.142164 | orchestrator | Saturday 08 November 2025 13:43:30 +0000 (0:00:00.643) 0:00:02.315 *****
2025-11-08 13:45:22.142175 | orchestrator | changed: [testbed-manager]
2025-11-08 13:45:22.142186 | orchestrator |
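The phpmyadmin play copies a docker-compose.yml to /opt/phpmyadmin and attaches the container to the previously created external traefik network; the next task then retries starting the service until the compose project is up. A minimal sketch of such a compose file (image tag, environment and network name are assumptions, not the actual testbed template):

    # /opt/phpmyadmin/docker-compose.yml -- illustrative sketch only
    services:
      phpmyadmin:
        image: phpmyadmin:latest        # image/tag assumed for illustration
        restart: unless-stopped
        environment:
          PMA_ARBITRARY: "1"            # allow connecting to an arbitrary DB host
        networks:
          - traefik

    networks:
      traefik:
        external: true                  # network created beforehand by the role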
2025-11-08 13:45:22.142197 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-11-08 13:45:22.142208 | orchestrator | Saturday 08 November 2025 13:43:33 +0000 (0:00:02.139) 0:00:04.455 *****
2025-11-08 13:45:22.142219 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2025-11-08 13:45:22.142230 | orchestrator | ok: [testbed-manager]
2025-11-08 13:45:22.142241 | orchestrator |
2025-11-08 13:45:22.142252 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-11-08 13:45:22.142263 | orchestrator | Saturday 08 November 2025 13:44:42 +0000 (0:01:09.261) 0:01:13.716 *****
2025-11-08 13:45:22.142274 | orchestrator | changed: [testbed-manager]
2025-11-08 13:45:22.142285 | orchestrator |
2025-11-08 13:45:22.142298 | orchestrator | PLAY RECAP *********************************************************************
2025-11-08 13:45:22.142309 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-08 13:45:22.142322 | orchestrator |
2025-11-08 13:45:22.142333 | orchestrator |
2025-11-08 13:45:22.142344 | orchestrator | TASKS RECAP ********************************************************************
2025-11-08 13:45:22.142355 | orchestrator | Saturday 08 November 2025 13:44:45 +0000 (0:00:03.565) 0:01:17.282 *****
2025-11-08 13:45:22.142366 | orchestrator | ===============================================================================
2025-11-08 13:45:22.142385 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 69.26s
2025-11-08 13:45:22.142397 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.57s
2025-11-08 13:45:22.142407 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 2.14s
2025-11-08 13:45:22.142418 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.11s
2025-11-08 13:45:22.142429 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.64s
2025-11-08 13:45:22.142440 | orchestrator |
2025-11-08 13:45:22.142451 | orchestrator |
2025-11-08 13:45:22.142462 | orchestrator | PLAY [Apply role common] *******************************************************
2025-11-08 13:45:22.142473 | orchestrator |
2025-11-08 13:45:22.142484 | orchestrator | TASK [common : include_tasks] **************************************************
2025-11-08 13:45:22.142495 | orchestrator | Saturday 08 November 2025 13:43:03 +0000 (0:00:00.239) 0:00:00.239 *****
2025-11-08 13:45:22.142531 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-11-08 13:45:22.142544 | orchestrator |
2025-11-08 13:45:22.142556 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-11-08 13:45:22.142568 | orchestrator | Saturday 08 November 2025 13:43:04 +0000 (0:00:01.034) 0:00:01.273 *****
2025-11-08 13:45:22.142581 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-11-08 13:45:22.142594 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-11-08 13:45:22.142606 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2025-11-08 13:45:22.142618 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2025-11-08 13:45:22.142631 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-11-08 13:45:22.142644 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-11-08 13:45:22.142657 | orchestrator | changed:
[testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-11-08 13:45:22.142670 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-11-08 13:45:22.142682 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-11-08 13:45:22.142693 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-11-08 13:45:22.142704 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-11-08 13:45:22.142818 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-11-08 13:45:22.142836 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-11-08 13:45:22.142854 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-11-08 13:45:22.142873 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-11-08 13:45:22.142893 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-11-08 13:45:22.142980 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-11-08 13:45:22.142995 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-11-08 13:45:22.143007 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-11-08 13:45:22.143018 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-11-08 13:45:22.143028 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-11-08 13:45:22.143039 | orchestrator | 2025-11-08 13:45:22.143050 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-11-08 13:45:22.143061 | orchestrator | Saturday 08 November 2025 13:43:08 +0000 (0:00:03.813) 0:00:05.086 ***** 2025-11-08 13:45:22.143072 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 13:45:22.143085 | orchestrator | 2025-11-08 13:45:22.143096 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-11-08 13:45:22.143106 | orchestrator | Saturday 08 November 2025 13:43:09 +0000 (0:00:01.168) 0:00:06.255 ***** 2025-11-08 13:45:22.143122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-08 13:45:22.143155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-08 13:45:22.143168 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-08 13:45:22.143180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-08 13:45:22.143191 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-08 13:45:22.143237 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.143251 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-08 13:45:22.143263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.143287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.143298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.143310 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-08 13:45:22.143321 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.143333 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.143383 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.143395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.143405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.143423 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.143434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.143459 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.143573 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.143589 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.143599 | orchestrator | 2025-11-08 13:45:22.143609 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-11-08 13:45:22.143619 | orchestrator | Saturday 08 November 2025 13:43:14 +0000 (0:00:04.534) 0:00:10.790 ***** 2025-11-08 13:45:22.143659 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-11-08 13:45:22.143671 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:45:22.143689 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:45:22.143699 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:45:22.143740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-11-08 13:45:22.143753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:45:22.143763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:45:22.143773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-11-08 13:45:22.143784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:45:22.143804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:45:22.143822 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:45:22.143849 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:45:22.143866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-11-08 13:45:22.143884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:45:22.143907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:45:22.143924 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:45:22.143938 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-11-08 13:45:22.143949 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:45:22.143959 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:45:22.143969 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-11-08 13:45:22.143988 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:45:22.144005 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:45:22.144015 | orchestrator | skipping: [testbed-node-4] 2025-11-08 
13:45:22.144025 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:45:22.144052 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-11-08 13:45:22.144076 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:45:22.144087 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:45:22.144097 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:45:22.144106 | orchestrator | 2025-11-08 13:45:22.144116 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-11-08 13:45:22.144126 | orchestrator | Saturday 08 November 2025 13:43:16 +0000 (0:00:02.327) 0:00:13.118 ***** 2025-11-08 13:45:22.144222 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-11-08 13:45:22.144234 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:45:22.144258 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:45:22.144269 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:45:22.144279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-11-08 13:45:22.144289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:45:22.144305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:45:22.144315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-11-08 13:45:22.144325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:45:22.144335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2025-11-08 13:45:22.144345 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:45:22.144355 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:45:22.144374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-11-08 13:45:22.144390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:45:22.144401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:45:22.144411 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:45:22.144421 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-11-08 13:45:22.144435 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:45:22.144450 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 
13:45:22.144467 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:45:22.144482 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-11-08 13:45:22.144499 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:45:22.144536 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:45:22.144556 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:45:22.144573 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-11-08 13:45:22.144587 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:45:22.144597 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:45:22.144614 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:45:22.144630 | orchestrator 
| 2025-11-08 13:45:22.144654 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-11-08 13:45:22.144671 | orchestrator | Saturday 08 November 2025 13:43:18 +0000 (0:00:02.385) 0:00:15.504 ***** 2025-11-08 13:45:22.144685 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:45:22.144700 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:45:22.144737 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:45:22.144752 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:45:22.144767 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:45:22.144781 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:45:22.144794 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:45:22.144936 | orchestrator | 2025-11-08 13:45:22.144949 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-11-08 13:45:22.144961 | orchestrator | Saturday 08 November 2025 13:43:20 +0000 (0:00:01.324) 0:00:16.828 ***** 2025-11-08 13:45:22.144972 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:45:22.144983 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:45:22.144992 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:45:22.145002 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:45:22.145011 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:45:22.145020 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:45:22.145040 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:45:22.145049 | orchestrator | 2025-11-08 13:45:22.145059 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-11-08 13:45:22.145068 | orchestrator | Saturday 08 November 2025 13:43:21 +0000 (0:00:00.940) 0:00:17.768 ***** 2025-11-08 13:45:22.145078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-08 13:45:22.145090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-08 13:45:22.145113 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-08 13:45:22.145131 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-08 13:45:22.145147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-08 13:45:22.145164 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-08 13:45:22.145182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.145208 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-08 13:45:22.145218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.145229 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.145246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.145263 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.145278 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.145291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.145318 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.145336 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.145353 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.145387 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.145399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.145409 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.145535 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.145625 | orchestrator | 2025-11-08 13:45:22.145636 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-11-08 13:45:22.145646 | orchestrator | Saturday 08 November 2025 13:43:27 +0000 (0:00:06.661) 0:00:24.429 ***** 2025-11-08 13:45:22.145656 | orchestrator | [WARNING]: Skipped 2025-11-08 13:45:22.145666 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-11-08 13:45:22.145676 | orchestrator | to this access issue: 2025-11-08 13:45:22.145695 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-11-08 13:45:22.145737 | orchestrator | directory 2025-11-08 13:45:22.145789 | orchestrator | ok: [testbed-manager -> localhost] 2025-11-08 13:45:22.145807 | orchestrator | 2025-11-08 
13:45:22.145823 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-11-08 13:45:22.145836 | orchestrator | Saturday 08 November 2025 13:43:29 +0000 (0:00:01.958) 0:00:26.388 ***** 2025-11-08 13:45:22.145846 | orchestrator | [WARNING]: Skipped 2025-11-08 13:45:22.145855 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-11-08 13:45:22.145865 | orchestrator | to this access issue: 2025-11-08 13:45:22.145875 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-11-08 13:45:22.145884 | orchestrator | directory 2025-11-08 13:45:22.145894 | orchestrator | ok: [testbed-manager -> localhost] 2025-11-08 13:45:22.145904 | orchestrator | 2025-11-08 13:45:22.145913 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-11-08 13:45:22.145923 | orchestrator | Saturday 08 November 2025 13:43:30 +0000 (0:00:00.948) 0:00:27.337 ***** 2025-11-08 13:45:22.145932 | orchestrator | [WARNING]: Skipped 2025-11-08 13:45:22.145942 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-11-08 13:45:22.145951 | orchestrator | to this access issue: 2025-11-08 13:45:22.145961 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-11-08 13:45:22.145970 | orchestrator | directory 2025-11-08 13:45:22.145980 | orchestrator | ok: [testbed-manager -> localhost] 2025-11-08 13:45:22.145990 | orchestrator | 2025-11-08 13:45:22.145999 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-11-08 13:45:22.146015 | orchestrator | Saturday 08 November 2025 13:43:31 +0000 (0:00:01.237) 0:00:28.575 ***** 2025-11-08 13:45:22.146074 | orchestrator | [WARNING]: Skipped 2025-11-08 13:45:22.146093 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-11-08 13:45:22.146109 | orchestrator | to this access issue: 2025-11-08 13:45:22.146126 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-11-08 13:45:22.146136 | orchestrator | directory 2025-11-08 13:45:22.146145 | orchestrator | ok: [testbed-manager -> localhost] 2025-11-08 13:45:22.146155 | orchestrator | 2025-11-08 13:45:22.146169 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-11-08 13:45:22.146184 | orchestrator | Saturday 08 November 2025 13:43:33 +0000 (0:00:01.351) 0:00:29.927 ***** 2025-11-08 13:45:22.146200 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:45:22.146216 | orchestrator | changed: [testbed-manager] 2025-11-08 13:45:22.146232 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:45:22.146248 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:45:22.146263 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:45:22.146274 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:45:22.146285 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:45:22.146295 | orchestrator | 2025-11-08 13:45:22.146306 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-11-08 13:45:22.146317 | orchestrator | Saturday 08 November 2025 13:43:37 +0000 (0:00:04.546) 0:00:34.474 ***** 2025-11-08 13:45:22.146327 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-11-08 
13:45:22.146339 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-11-08 13:45:22.146350 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-11-08 13:45:22.146372 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-11-08 13:45:22.146383 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-11-08 13:45:22.146403 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-11-08 13:45:22.146413 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-11-08 13:45:22.146422 | orchestrator | 2025-11-08 13:45:22.146432 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-11-08 13:45:22.146441 | orchestrator | Saturday 08 November 2025 13:43:40 +0000 (0:00:03.049) 0:00:37.523 ***** 2025-11-08 13:45:22.146451 | orchestrator | changed: [testbed-manager] 2025-11-08 13:45:22.146460 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:45:22.146469 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:45:22.146479 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:45:22.146488 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:45:22.146497 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:45:22.146507 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:45:22.146516 | orchestrator | 2025-11-08 13:45:22.146526 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-11-08 13:45:22.146535 | orchestrator | Saturday 08 November 2025 13:43:43 +0000 (0:00:03.175) 0:00:40.698 ***** 2025-11-08 13:45:22.146546 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-08 13:45:22.146562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:45:22.146573 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-08 13:45:22.146583 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:45:22.146593 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.146615 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-08 13:45:22.146632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:45:22.146642 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.146652 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-08 13:45:22.146667 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:45:22.146677 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-08 13:45:22.146687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:45:22.146697 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.146749 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.146760 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-08 13:45:22.146771 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:45:22.146781 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.146795 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-08 13:45:22.146806 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:45:22.146816 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.146826 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.146843 | orchestrator | 2025-11-08 13:45:22.146853 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-11-08 13:45:22.146863 | orchestrator | Saturday 08 November 2025 13:43:46 +0000 (0:00:02.700) 0:00:43.399 ***** 2025-11-08 13:45:22.146873 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-11-08 13:45:22.146882 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-11-08 13:45:22.146892 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-11-08 13:45:22.146910 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-11-08 13:45:22.146921 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-11-08 13:45:22.146931 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-11-08 13:45:22.146941 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-11-08 13:45:22.146951 | orchestrator | 2025-11-08 13:45:22.146960 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-11-08 13:45:22.146970 | orchestrator | Saturday 08 November 2025 13:43:50 +0000 (0:00:03.533) 0:00:46.933 ***** 2025-11-08 13:45:22.146980 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-11-08 13:45:22.146989 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-11-08 13:45:22.146999 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-11-08 13:45:22.147009 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-11-08 13:45:22.147018 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-11-08 13:45:22.147028 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-11-08 13:45:22.147037 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-11-08 13:45:22.147047 | orchestrator | 2025-11-08 13:45:22.147056 | orchestrator | TASK [common : Check common containers] **************************************** 2025-11-08 13:45:22.147066 | orchestrator | Saturday 08 November 2025 13:43:52 +0000 (0:00:02.159) 0:00:49.093 ***** 2025-11-08 13:45:22.147081 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-08 13:45:22.147092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-08 13:45:22.147102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}}) 2025-11-08 13:45:22.147118 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.147134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.147144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-08 13:45:22.147154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.147168 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-08 13:45:22.147179 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-08 13:45:22.147189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.147205 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.147216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.147227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.147242 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-08 13:45:22.147253 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.147267 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.147277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.147293 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.147304 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.147314 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.147324 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:45:22.147334 | orchestrator | 2025-11-08 13:45:22.147349 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-11-08 13:45:22.147359 | orchestrator | Saturday 08 November 2025 13:43:56 +0000 (0:00:04.148) 0:00:53.241 ***** 2025-11-08 13:45:22.147369 | orchestrator | changed: [testbed-manager] 2025-11-08 13:45:22.147379 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:45:22.147389 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:45:22.147398 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:45:22.147408 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:45:22.147417 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:45:22.147427 | 
orchestrator | changed: [testbed-node-5] 2025-11-08 13:45:22.147436 | orchestrator | 2025-11-08 13:45:22.147446 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-11-08 13:45:22.147456 | orchestrator | Saturday 08 November 2025 13:43:58 +0000 (0:00:01.872) 0:00:55.114 ***** 2025-11-08 13:45:22.147465 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:45:22.147475 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:45:22.147484 | orchestrator | changed: [testbed-manager] 2025-11-08 13:45:22.147494 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:45:22.147503 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:45:22.147512 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:45:22.147522 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:45:22.147531 | orchestrator | 2025-11-08 13:45:22.147541 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-11-08 13:45:22.147550 | orchestrator | Saturday 08 November 2025 13:43:59 +0000 (0:00:01.540) 0:00:56.654 ***** 2025-11-08 13:45:22.147560 | orchestrator | 2025-11-08 13:45:22.147569 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-11-08 13:45:22.147579 | orchestrator | Saturday 08 November 2025 13:44:00 +0000 (0:00:00.114) 0:00:56.768 ***** 2025-11-08 13:45:22.147589 | orchestrator | 2025-11-08 13:45:22.147598 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-11-08 13:45:22.147614 | orchestrator | Saturday 08 November 2025 13:44:00 +0000 (0:00:00.099) 0:00:56.868 ***** 2025-11-08 13:45:22.147624 | orchestrator | 2025-11-08 13:45:22.147633 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-11-08 13:45:22.147643 | orchestrator | Saturday 08 November 2025 13:44:00 +0000 (0:00:00.171) 0:00:57.040 ***** 2025-11-08 13:45:22.147652 | orchestrator | 2025-11-08 13:45:22.147662 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-11-08 13:45:22.147671 | orchestrator | Saturday 08 November 2025 13:44:00 +0000 (0:00:00.066) 0:00:57.106 ***** 2025-11-08 13:45:22.147681 | orchestrator | 2025-11-08 13:45:22.147690 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-11-08 13:45:22.147700 | orchestrator | Saturday 08 November 2025 13:44:00 +0000 (0:00:00.065) 0:00:57.172 ***** 2025-11-08 13:45:22.147740 | orchestrator | 2025-11-08 13:45:22.147751 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-11-08 13:45:22.147761 | orchestrator | Saturday 08 November 2025 13:44:00 +0000 (0:00:00.079) 0:00:57.252 ***** 2025-11-08 13:45:22.147771 | orchestrator | 2025-11-08 13:45:22.147781 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-11-08 13:45:22.147790 | orchestrator | Saturday 08 November 2025 13:44:00 +0000 (0:00:00.093) 0:00:57.345 ***** 2025-11-08 13:45:22.147800 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:45:22.147810 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:45:22.147819 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:45:22.147829 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:45:22.147838 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:45:22.147848 | orchestrator | changed: [testbed-manager] 2025-11-08 13:45:22.147857 
| orchestrator | changed: [testbed-node-4] 2025-11-08 13:45:22.147867 | orchestrator | 2025-11-08 13:45:22.147877 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-11-08 13:45:22.147886 | orchestrator | Saturday 08 November 2025 13:44:38 +0000 (0:00:37.883) 0:01:35.229 ***** 2025-11-08 13:45:22.147896 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:45:22.147905 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:45:22.147915 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:45:22.147930 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:45:22.147940 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:45:22.147950 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:45:22.147959 | orchestrator | changed: [testbed-manager] 2025-11-08 13:45:22.147969 | orchestrator | 2025-11-08 13:45:22.147978 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-11-08 13:45:22.147988 | orchestrator | Saturday 08 November 2025 13:45:08 +0000 (0:00:30.011) 0:02:05.240 ***** 2025-11-08 13:45:22.147998 | orchestrator | ok: [testbed-manager] 2025-11-08 13:45:22.148007 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:45:22.148017 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:45:22.148026 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:45:22.148036 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:45:22.148045 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:45:22.148055 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:45:22.148064 | orchestrator | 2025-11-08 13:45:22.148074 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-11-08 13:45:22.148084 | orchestrator | Saturday 08 November 2025 13:45:10 +0000 (0:00:02.374) 0:02:07.615 ***** 2025-11-08 13:45:22.148093 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:45:22.148103 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:45:22.148112 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:45:22.148122 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:45:22.148131 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:45:22.148141 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:45:22.148150 | orchestrator | changed: [testbed-manager] 2025-11-08 13:45:22.148160 | orchestrator | 2025-11-08 13:45:22.148169 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 13:45:22.148180 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-11-08 13:45:22.148196 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-11-08 13:45:22.148212 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-11-08 13:45:22.148222 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-11-08 13:45:22.148232 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-11-08 13:45:22.148242 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-11-08 13:45:22.148252 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-11-08 13:45:22.148261 | orchestrator | 2025-11-08 13:45:22.148271 | orchestrator | 2025-11-08 
13:45:22.148281 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 13:45:22.148291 | orchestrator | Saturday 08 November 2025 13:45:19 +0000 (0:00:08.503) 0:02:16.119 ***** 2025-11-08 13:45:22.148300 | orchestrator | =============================================================================== 2025-11-08 13:45:22.148310 | orchestrator | common : Restart fluentd container ------------------------------------- 37.88s 2025-11-08 13:45:22.148319 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 30.01s 2025-11-08 13:45:22.148329 | orchestrator | common : Restart cron container ----------------------------------------- 8.50s 2025-11-08 13:45:22.148339 | orchestrator | common : Copying over config.json files for services -------------------- 6.66s 2025-11-08 13:45:22.148348 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.55s 2025-11-08 13:45:22.148358 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.53s 2025-11-08 13:45:22.148367 | orchestrator | common : Check common containers ---------------------------------------- 4.15s 2025-11-08 13:45:22.148377 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.81s 2025-11-08 13:45:22.148387 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.53s 2025-11-08 13:45:22.148400 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.18s 2025-11-08 13:45:22.148410 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.05s 2025-11-08 13:45:22.148420 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.70s 2025-11-08 13:45:22.148429 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.39s 2025-11-08 13:45:22.148439 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.37s 2025-11-08 13:45:22.148448 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.33s 2025-11-08 13:45:22.148458 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.16s 2025-11-08 13:45:22.148468 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.96s 2025-11-08 13:45:22.148477 | orchestrator | common : Creating log volume -------------------------------------------- 1.87s 2025-11-08 13:45:22.148487 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.54s 2025-11-08 13:45:22.148497 | orchestrator | common : Find custom fluentd output config files ------------------------ 1.35s 2025-11-08 13:45:22.148507 | orchestrator | 2025-11-08 13:45:22 | INFO  | Task d0b7780b-0c44-4c43-9212-b38ccc66ea21 is in state STARTED 2025-11-08 13:45:22.148517 | orchestrator | 2025-11-08 13:45:22 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:45:22.148535 | orchestrator | 2025-11-08 13:45:22 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:45:22.148546 | orchestrator | 2025-11-08 13:45:22 | INFO  | Task 578512f3-df43-47b7-90c8-1ee4a018e7d0 is in state STARTED 2025-11-08 13:45:22.148555 | orchestrator | 2025-11-08 13:45:22 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:45:22.148566 | 
orchestrator | 2025-11-08 13:45:22 | INFO  | Task 1f7876b4-67de-4b91-a5c2-1ac056351072 is in state STARTED 2025-11-08 13:45:22.148576 | orchestrator | 2025-11-08 13:45:22 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:45:25.175585 | orchestrator | 2025-11-08 13:45:25 | INFO  | Task d0b7780b-0c44-4c43-9212-b38ccc66ea21 is in state STARTED 2025-11-08 13:45:25.175839 | orchestrator | 2025-11-08 13:45:25 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:45:25.176646 | orchestrator | 2025-11-08 13:45:25 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:45:25.177272 | orchestrator | 2025-11-08 13:45:25 | INFO  | Task 578512f3-df43-47b7-90c8-1ee4a018e7d0 is in state STARTED 2025-11-08 13:45:25.178151 | orchestrator | 2025-11-08 13:45:25 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:45:25.178972 | orchestrator | 2025-11-08 13:45:25 | INFO  | Task 1f7876b4-67de-4b91-a5c2-1ac056351072 is in state STARTED 2025-11-08 13:45:25.179964 | orchestrator | 2025-11-08 13:45:25 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:45:28.200087 | orchestrator | 2025-11-08 13:45:28 | INFO  | Task d0b7780b-0c44-4c43-9212-b38ccc66ea21 is in state STARTED 2025-11-08 13:45:28.201396 | orchestrator | 2025-11-08 13:45:28 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:45:28.203266 | orchestrator | 2025-11-08 13:45:28 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:45:28.204920 | orchestrator | 2025-11-08 13:45:28 | INFO  | Task 578512f3-df43-47b7-90c8-1ee4a018e7d0 is in state STARTED 2025-11-08 13:45:28.206527 | orchestrator | 2025-11-08 13:45:28 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:45:28.208401 | orchestrator | 2025-11-08 13:45:28 | INFO  | Task 1f7876b4-67de-4b91-a5c2-1ac056351072 is in state STARTED 2025-11-08 13:45:28.208438 | orchestrator | 2025-11-08 13:45:28 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:45:31.233288 | orchestrator | 2025-11-08 13:45:31 | INFO  | Task d0b7780b-0c44-4c43-9212-b38ccc66ea21 is in state STARTED 2025-11-08 13:45:31.233417 | orchestrator | 2025-11-08 13:45:31 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:45:31.233440 | orchestrator | 2025-11-08 13:45:31 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:45:31.233593 | orchestrator | 2025-11-08 13:45:31 | INFO  | Task 578512f3-df43-47b7-90c8-1ee4a018e7d0 is in state STARTED 2025-11-08 13:45:31.233631 | orchestrator | 2025-11-08 13:45:31 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:45:31.234284 | orchestrator | 2025-11-08 13:45:31 | INFO  | Task 1f7876b4-67de-4b91-a5c2-1ac056351072 is in state STARTED 2025-11-08 13:45:31.234341 | orchestrator | 2025-11-08 13:45:31 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:45:34.272472 | orchestrator | 2025-11-08 13:45:34 | INFO  | Task d0b7780b-0c44-4c43-9212-b38ccc66ea21 is in state STARTED 2025-11-08 13:45:34.273303 | orchestrator | 2025-11-08 13:45:34 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:45:34.274188 | orchestrator | 2025-11-08 13:45:34 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:45:34.275271 | orchestrator | 2025-11-08 13:45:34 | INFO  | Task 578512f3-df43-47b7-90c8-1ee4a018e7d0 is in state STARTED 
2025-11-08 13:45:34.275903 | orchestrator | 2025-11-08 13:45:34 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:45:34.276904 | orchestrator | 2025-11-08 13:45:34 | INFO  | Task 1f7876b4-67de-4b91-a5c2-1ac056351072 is in state STARTED 2025-11-08 13:45:34.276933 | orchestrator | 2025-11-08 13:45:34 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:45:37.329406 | orchestrator | 2025-11-08 13:45:37 | INFO  | Task d0b7780b-0c44-4c43-9212-b38ccc66ea21 is in state STARTED 2025-11-08 13:45:37.331687 | orchestrator | 2025-11-08 13:45:37 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:45:37.334119 | orchestrator | 2025-11-08 13:45:37 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:45:37.335267 | orchestrator | 2025-11-08 13:45:37 | INFO  | Task 578512f3-df43-47b7-90c8-1ee4a018e7d0 is in state STARTED 2025-11-08 13:45:37.338376 | orchestrator | 2025-11-08 13:45:37 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:45:37.341402 | orchestrator | 2025-11-08 13:45:37 | INFO  | Task 1f7876b4-67de-4b91-a5c2-1ac056351072 is in state STARTED 2025-11-08 13:45:37.341461 | orchestrator | 2025-11-08 13:45:37 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:45:40.458209 | orchestrator | 2025-11-08 13:45:40 | INFO  | Task d0b7780b-0c44-4c43-9212-b38ccc66ea21 is in state SUCCESS 2025-11-08 13:45:40.462216 | orchestrator | 2025-11-08 13:45:40 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:45:40.464228 | orchestrator | 2025-11-08 13:45:40 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:45:40.465375 | orchestrator | 2025-11-08 13:45:40 | INFO  | Task 578512f3-df43-47b7-90c8-1ee4a018e7d0 is in state STARTED 2025-11-08 13:45:40.467101 | orchestrator | 2025-11-08 13:45:40 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:45:40.467771 | orchestrator | 2025-11-08 13:45:40 | INFO  | Task 1f7876b4-67de-4b91-a5c2-1ac056351072 is in state STARTED 2025-11-08 13:45:40.468533 | orchestrator | 2025-11-08 13:45:40 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:45:40.468972 | orchestrator | 2025-11-08 13:45:40 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:45:43.535594 | orchestrator | 2025-11-08 13:45:43 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:45:43.536864 | orchestrator | 2025-11-08 13:45:43 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:45:43.538687 | orchestrator | 2025-11-08 13:45:43 | INFO  | Task 578512f3-df43-47b7-90c8-1ee4a018e7d0 is in state STARTED 2025-11-08 13:45:43.539312 | orchestrator | 2025-11-08 13:45:43 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:45:43.541415 | orchestrator | 2025-11-08 13:45:43 | INFO  | Task 1f7876b4-67de-4b91-a5c2-1ac056351072 is in state STARTED 2025-11-08 13:45:43.541999 | orchestrator | 2025-11-08 13:45:43 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:45:43.542050 | orchestrator | 2025-11-08 13:45:43 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:45:46.583941 | orchestrator | 2025-11-08 13:45:46 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:45:46.584326 | orchestrator | 2025-11-08 13:45:46 | INFO  | Task 
61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:45:46.584988 | orchestrator | 2025-11-08 13:45:46 | INFO  | Task 578512f3-df43-47b7-90c8-1ee4a018e7d0 is in state STARTED 2025-11-08 13:45:46.586446 | orchestrator | 2025-11-08 13:45:46 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:45:46.587180 | orchestrator | 2025-11-08 13:45:46 | INFO  | Task 1f7876b4-67de-4b91-a5c2-1ac056351072 is in state STARTED 2025-11-08 13:45:46.587870 | orchestrator | 2025-11-08 13:45:46 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:45:46.587888 | orchestrator | 2025-11-08 13:45:46 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:45:49.639463 | orchestrator | 2025-11-08 13:45:49 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:45:49.641499 | orchestrator | 2025-11-08 13:45:49 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:45:49.641920 | orchestrator | 2025-11-08 13:45:49 | INFO  | Task 578512f3-df43-47b7-90c8-1ee4a018e7d0 is in state STARTED 2025-11-08 13:45:49.644280 | orchestrator | 2025-11-08 13:45:49 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:45:49.645595 | orchestrator | 2025-11-08 13:45:49 | INFO  | Task 1f7876b4-67de-4b91-a5c2-1ac056351072 is in state STARTED 2025-11-08 13:45:49.646837 | orchestrator | 2025-11-08 13:45:49 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:45:49.646930 | orchestrator | 2025-11-08 13:45:49 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:45:52.671371 | orchestrator | 2025-11-08 13:45:52 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:45:52.671667 | orchestrator | 2025-11-08 13:45:52 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:45:52.672564 | orchestrator | 2025-11-08 13:45:52 | INFO  | Task 578512f3-df43-47b7-90c8-1ee4a018e7d0 is in state STARTED 2025-11-08 13:45:52.673542 | orchestrator | 2025-11-08 13:45:52 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:45:52.675410 | orchestrator | 2025-11-08 13:45:52 | INFO  | Task 1f7876b4-67de-4b91-a5c2-1ac056351072 is in state STARTED 2025-11-08 13:45:52.678805 | orchestrator | 2025-11-08 13:45:52 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:45:52.679298 | orchestrator | 2025-11-08 13:45:52 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:45:55.740544 | orchestrator | 2025-11-08 13:45:55 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:45:55.742343 | orchestrator | 2025-11-08 13:45:55 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:45:55.743462 | orchestrator | 2025-11-08 13:45:55 | INFO  | Task 578512f3-df43-47b7-90c8-1ee4a018e7d0 is in state SUCCESS 2025-11-08 13:45:55.746368 | orchestrator | 2025-11-08 13:45:55.746405 | orchestrator | 2025-11-08 13:45:55.746414 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-08 13:45:55.746424 | orchestrator | 2025-11-08 13:45:55.746431 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-08 13:45:55.746439 | orchestrator | Saturday 08 November 2025 13:45:24 +0000 (0:00:00.283) 0:00:00.283 ***** 2025-11-08 13:45:55.746446 | orchestrator | ok: [testbed-node-0] 2025-11-08 
13:45:55.746455 | orchestrator | ok: [testbed-node-1]
2025-11-08 13:45:55.746462 | orchestrator | ok: [testbed-node-2]
2025-11-08 13:45:55.746469 | orchestrator |
2025-11-08 13:45:55.746477 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-11-08 13:45:55.746509 | orchestrator | Saturday 08 November 2025 13:45:25 +0000 (0:00:00.355) 0:00:00.639 *****
2025-11-08 13:45:55.746518 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2025-11-08 13:45:55.746527 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2025-11-08 13:45:55.746534 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2025-11-08 13:45:55.746542 | orchestrator |
2025-11-08 13:45:55.746549 | orchestrator | PLAY [Apply role memcached] ****************************************************
2025-11-08 13:45:55.746557 | orchestrator |
2025-11-08 13:45:55.746564 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2025-11-08 13:45:55.746572 | orchestrator | Saturday 08 November 2025 13:45:25 +0000 (0:00:00.454) 0:00:01.093 *****
2025-11-08 13:45:55.746580 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-08 13:45:55.746588 | orchestrator |
2025-11-08 13:45:55.746595 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2025-11-08 13:45:55.746603 | orchestrator | Saturday 08 November 2025 13:45:26 +0000 (0:00:00.583) 0:00:01.677 *****
2025-11-08 13:45:55.746610 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-11-08 13:45:55.746618 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-11-08 13:45:55.746625 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-11-08 13:45:55.746634 | orchestrator |
2025-11-08 13:45:55.746638 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2025-11-08 13:45:55.746643 | orchestrator | Saturday 08 November 2025 13:45:26 +0000 (0:00:00.735) 0:00:02.412 *****
2025-11-08 13:45:55.746647 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-11-08 13:45:55.746652 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-11-08 13:45:55.746657 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-11-08 13:45:55.746661 | orchestrator |
2025-11-08 13:45:55.746666 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2025-11-08 13:45:55.746670 | orchestrator | Saturday 08 November 2025 13:45:28 +0000 (0:00:01.799) 0:00:04.212 *****
2025-11-08 13:45:55.746675 | orchestrator | changed: [testbed-node-0]
2025-11-08 13:45:55.746680 | orchestrator | changed: [testbed-node-1]
2025-11-08 13:45:55.746696 | orchestrator | changed: [testbed-node-2]
2025-11-08 13:45:55.746701 | orchestrator |
2025-11-08 13:45:55.746705 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2025-11-08 13:45:55.746710 | orchestrator | Saturday 08 November 2025 13:45:30 +0000 (0:00:01.526) 0:00:05.739 *****
2025-11-08 13:45:55.746738 | orchestrator | changed: [testbed-node-1]
2025-11-08 13:45:55.746745 | orchestrator | changed: [testbed-node-0]
2025-11-08 13:45:55.746750 | orchestrator | changed: [testbed-node-2]
2025-11-08 13:45:55.746754 | orchestrator |
2025-11-08 13:45:55.746759 | orchestrator | PLAY RECAP *********************************************************************
2025-11-08 13:45:55.746764 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-08 13:45:55.746770 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-08 13:45:55.746776 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-08 13:45:55.746780 | orchestrator |
2025-11-08 13:45:55.746785 | orchestrator |
2025-11-08 13:45:55.746789 | orchestrator | TASKS RECAP ********************************************************************
2025-11-08 13:45:55.746794 | orchestrator | Saturday 08 November 2025 13:45:37 +0000 (0:00:07.762) 0:00:13.501 *****
2025-11-08 13:45:55.746798 | orchestrator | ===============================================================================
2025-11-08 13:45:55.746803 | orchestrator | memcached : Restart memcached container --------------------------------- 7.76s
2025-11-08 13:45:55.746807 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.80s
2025-11-08 13:45:55.746817 | orchestrator | memcached : Check memcached container ----------------------------------- 1.53s
2025-11-08 13:45:55.746822 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.74s
2025-11-08 13:45:55.746826 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.58s
2025-11-08 13:45:55.746831 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.45s
2025-11-08 13:45:55.746835 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.36s
2025-11-08 13:45:55.746840 | orchestrator |
2025-11-08 13:45:55.746844 | orchestrator |
2025-11-08 13:45:55.746849 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-11-08 13:45:55.746853 | orchestrator |
2025-11-08 13:45:55.746858 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-11-08 13:45:55.746862 | orchestrator | Saturday 08 November 2025 13:45:25 +0000 (0:00:00.393) 0:00:00.393 *****
2025-11-08 13:45:55.746867 | orchestrator | ok: [testbed-node-0]
2025-11-08 13:45:55.746871 | orchestrator | ok: [testbed-node-1]
2025-11-08 13:45:55.746876 | orchestrator | ok: [testbed-node-2]
2025-11-08 13:45:55.746880 | orchestrator |
2025-11-08 13:45:55.746885 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-11-08 13:45:55.746899 | orchestrator | Saturday 08 November 2025 13:45:25 +0000 (0:00:00.321) 0:00:00.714 *****
2025-11-08 13:45:55.746904 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2025-11-08 13:45:55.746909 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2025-11-08 13:45:55.746921 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2025-11-08 13:45:55.746926 | orchestrator |
2025-11-08 13:45:55.746930 | orchestrator | PLAY [Apply role redis] ********************************************************
2025-11-08 13:45:55.746935 | orchestrator |
2025-11-08 13:45:55.746939 | orchestrator | TASK [redis : include_tasks] ***************************************************
2025-11-08 13:45:55.746944 | orchestrator | Saturday 08 November 2025 13:45:26 +0000 (0:00:00.473) 0:00:01.188 *****
2025-11-08 13:45:55.746949 | orchestrator | included:
/ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:45:55.746954 | orchestrator | 2025-11-08 13:45:55.746958 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-11-08 13:45:55.746963 | orchestrator | Saturday 08 November 2025 13:45:26 +0000 (0:00:00.517) 0:00:01.705 ***** 2025-11-08 13:45:55.746972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-08 13:45:55.746983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-08 13:45:55.746999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-08 13:45:55.747009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-08 13:45:55.747016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 
'timeout': '30'}}}) 2025-11-08 13:45:55.747025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-08 13:45:55.747031 | orchestrator | 2025-11-08 13:45:55.747037 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-11-08 13:45:55.747042 | orchestrator | Saturday 08 November 2025 13:45:27 +0000 (0:00:01.035) 0:00:02.741 ***** 2025-11-08 13:45:55.747048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-08 13:45:55.747053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-08 13:45:55.747061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-08 13:45:55.747070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-08 13:45:55.747076 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-08 13:45:55.747085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-08 13:45:55.747090 | orchestrator | 2025-11-08 13:45:55.747095 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-11-08 13:45:55.747100 | orchestrator | Saturday 08 November 2025 13:45:30 +0000 (0:00:02.533) 0:00:05.274 ***** 2025-11-08 13:45:55.747106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-08 13:45:55.747111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-08 13:45:55.747117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-08 13:45:55.747128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': 
{'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-08 13:45:55.747134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-08 13:45:55.747139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-08 13:45:55.747144 | orchestrator | 2025-11-08 13:45:55.747250 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-11-08 13:45:55.747264 | orchestrator | Saturday 08 November 2025 13:45:32 +0000 (0:00:02.689) 0:00:07.964 ***** 2025-11-08 13:45:55.747272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-08 13:45:55.747280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-08 13:45:55.747289 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-08 13:45:55.747310 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-08 13:45:55.747319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-08 13:45:55.747326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-08 13:45:55.747334 | orchestrator | 2025-11-08 13:45:55.747342 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-11-08 13:45:55.747350 | orchestrator | Saturday 08 November 2025 13:45:34 +0000 (0:00:02.069) 0:00:10.033 ***** 2025-11-08 13:45:55.747358 | orchestrator | 2025-11-08 13:45:55.747365 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-11-08 13:45:55.747376 | orchestrator | Saturday 08 November 2025 13:45:34 +0000 (0:00:00.106) 0:00:10.139 ***** 2025-11-08 13:45:55.747381 | orchestrator | 2025-11-08 13:45:55.747387 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-11-08 13:45:55.747395 | orchestrator | Saturday 08 November 2025 13:45:35 +0000 (0:00:00.134) 
0:00:10.273 *****
2025-11-08 13:45:55.747402 | orchestrator |
2025-11-08 13:45:55.747409 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2025-11-08 13:45:55.747417 | orchestrator | Saturday 08 November 2025 13:45:35 +0000 (0:00:00.186) 0:00:10.460 *****
2025-11-08 13:45:55.747425 | orchestrator | changed: [testbed-node-0]
2025-11-08 13:45:55.747432 | orchestrator | changed: [testbed-node-1]
2025-11-08 13:45:55.747440 | orchestrator | changed: [testbed-node-2]
2025-11-08 13:45:55.747448 | orchestrator |
2025-11-08 13:45:55.747456 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-11-08 13:45:55.747462 | orchestrator | Saturday 08 November 2025 13:45:44 +0000 (0:00:09.513) 0:00:19.974 *****
2025-11-08 13:45:55.747467 | orchestrator | changed: [testbed-node-0]
2025-11-08 13:45:55.747471 | orchestrator | changed: [testbed-node-1]
2025-11-08 13:45:55.747476 | orchestrator | changed: [testbed-node-2]
2025-11-08 13:45:55.747486 | orchestrator |
2025-11-08 13:45:55.747491 | orchestrator | PLAY RECAP *********************************************************************
2025-11-08 13:45:55.747496 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-08 13:45:55.747501 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-08 13:45:55.747506 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-08 13:45:55.747510 | orchestrator |
2025-11-08 13:45:55.747515 | orchestrator |
2025-11-08 13:45:55.747523 | orchestrator | TASKS RECAP ********************************************************************
2025-11-08 13:45:55.747530 | orchestrator | Saturday 08 November 2025 13:45:54 +0000 (0:00:09.952) 0:00:29.926 *****
2025-11-08 13:45:55.747537 | orchestrator | ===============================================================================
2025-11-08 13:45:55.747545 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 9.95s
2025-11-08 13:45:55.747552 | orchestrator | redis : Restart redis container ----------------------------------------- 9.51s
2025-11-08 13:45:55.747559 | orchestrator | redis : Copying over redis config files --------------------------------- 2.69s
2025-11-08 13:45:55.747566 | orchestrator | redis : Copying over default config.json files -------------------------- 2.53s
2025-11-08 13:45:55.747573 | orchestrator | redis : Check redis containers ------------------------------------------ 2.07s
2025-11-08 13:45:55.747580 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.04s
2025-11-08 13:45:55.747587 | orchestrator | redis : include_tasks --------------------------------------------------- 0.52s
2025-11-08 13:45:55.747593 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.47s
2025-11-08 13:45:55.747600 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.43s
2025-11-08 13:45:55.747607 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s
2025-11-08 13:45:55.747613 | orchestrator | 2025-11-08 13:45:55 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED
2025-11-08 13:45:55.748064 | orchestrator | 2025-11-08 13:45:55 | INFO  | Task 1f7876b4-67de-4b91-a5c2-1ac056351072 is in state STARTED 2025-11-08
13:45:55.749038 | orchestrator | 2025-11-08 13:45:55 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:45:55.750005 | orchestrator | 2025-11-08 13:45:55 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:45:58.814311 | orchestrator | 2025-11-08 13:45:58 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:45:58.815502 | orchestrator | 2025-11-08 13:45:58 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:45:58.816117 | orchestrator | 2025-11-08 13:45:58 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:45:58.816904 | orchestrator | 2025-11-08 13:45:58 | INFO  | Task 1f7876b4-67de-4b91-a5c2-1ac056351072 is in state STARTED 2025-11-08 13:45:58.818550 | orchestrator | 2025-11-08 13:45:58 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:45:58.818640 | orchestrator | 2025-11-08 13:45:58 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:46:01.859840 | orchestrator | 2025-11-08 13:46:01 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:46:01.860064 | orchestrator | 2025-11-08 13:46:01 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:46:01.860094 | orchestrator | 2025-11-08 13:46:01 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:46:01.860707 | orchestrator | 2025-11-08 13:46:01 | INFO  | Task 1f7876b4-67de-4b91-a5c2-1ac056351072 is in state STARTED 2025-11-08 13:46:01.865315 | orchestrator | 2025-11-08 13:46:01 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:46:01.865403 | orchestrator | 2025-11-08 13:46:01 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:46:04.926351 | orchestrator | 2025-11-08 13:46:04 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:46:04.926475 | orchestrator | 2025-11-08 13:46:04 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:46:04.926494 | orchestrator | 2025-11-08 13:46:04 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:46:04.927141 | orchestrator | 2025-11-08 13:46:04 | INFO  | Task 1f7876b4-67de-4b91-a5c2-1ac056351072 is in state STARTED 2025-11-08 13:46:04.928994 | orchestrator | 2025-11-08 13:46:04 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:46:04.929067 | orchestrator | 2025-11-08 13:46:04 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:46:07.966191 | orchestrator | 2025-11-08 13:46:07 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:46:07.969134 | orchestrator | 2025-11-08 13:46:07 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:46:07.971274 | orchestrator | 2025-11-08 13:46:07 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:46:07.973066 | orchestrator | 2025-11-08 13:46:07 | INFO  | Task 1f7876b4-67de-4b91-a5c2-1ac056351072 is in state STARTED 2025-11-08 13:46:07.975518 | orchestrator | 2025-11-08 13:46:07 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:46:07.975560 | orchestrator | 2025-11-08 13:46:07 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:46:11.053215 | orchestrator | 2025-11-08 13:46:11 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 
13:46:11.053325 | orchestrator | 2025-11-08 13:46:11 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:46:11.053340 | orchestrator | 2025-11-08 13:46:11 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:46:11.053352 | orchestrator | 2025-11-08 13:46:11 | INFO  | Task 1f7876b4-67de-4b91-a5c2-1ac056351072 is in state STARTED 2025-11-08 13:46:11.053382 | orchestrator | 2025-11-08 13:46:11 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:46:11.053394 | orchestrator | 2025-11-08 13:46:11 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:46:14.260945 | orchestrator | 2025-11-08 13:46:14 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:46:14.261077 | orchestrator | 2025-11-08 13:46:14 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:46:14.261092 | orchestrator | 2025-11-08 13:46:14 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:46:14.261105 | orchestrator | 2025-11-08 13:46:14 | INFO  | Task 1f7876b4-67de-4b91-a5c2-1ac056351072 is in state STARTED 2025-11-08 13:46:14.261116 | orchestrator | 2025-11-08 13:46:14 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:46:14.261127 | orchestrator | 2025-11-08 13:46:14 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:46:17.320165 | orchestrator | 2025-11-08 13:46:17 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:46:17.320291 | orchestrator | 2025-11-08 13:46:17 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:46:17.322368 | orchestrator | 2025-11-08 13:46:17 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:46:17.322954 | orchestrator | 2025-11-08 13:46:17 | INFO  | Task 1f7876b4-67de-4b91-a5c2-1ac056351072 is in state STARTED 2025-11-08 13:46:17.324000 | orchestrator | 2025-11-08 13:46:17 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:46:17.324028 | orchestrator | 2025-11-08 13:46:17 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:46:20.453661 | orchestrator | 2025-11-08 13:46:20 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:46:20.457682 | orchestrator | 2025-11-08 13:46:20 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:46:20.458161 | orchestrator | 2025-11-08 13:46:20 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:46:20.462958 | orchestrator | 2025-11-08 13:46:20 | INFO  | Task 1f7876b4-67de-4b91-a5c2-1ac056351072 is in state STARTED 2025-11-08 13:46:20.463002 | orchestrator | 2025-11-08 13:46:20 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:46:20.463007 | orchestrator | 2025-11-08 13:46:20 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:46:23.530703 | orchestrator | 2025-11-08 13:46:23 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:46:23.531001 | orchestrator | 2025-11-08 13:46:23 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:46:23.532913 | orchestrator | 2025-11-08 13:46:23 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:46:23.532943 | orchestrator | 2025-11-08 13:46:23 | INFO  | Task 1f7876b4-67de-4b91-a5c2-1ac056351072 is in 
state STARTED 2025-11-08 13:46:23.532952 | orchestrator | 2025-11-08 13:46:23 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:46:23.532957 | orchestrator | 2025-11-08 13:46:23 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:46:26.622857 | orchestrator | 2025-11-08 13:46:26 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:46:26.627208 | orchestrator | 2025-11-08 13:46:26 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:46:26.628679 | orchestrator | 2025-11-08 13:46:26 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:46:26.630943 | orchestrator | 2025-11-08 13:46:26 | INFO  | Task 1f7876b4-67de-4b91-a5c2-1ac056351072 is in state STARTED 2025-11-08 13:46:26.632798 | orchestrator | 2025-11-08 13:46:26 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:46:26.633195 | orchestrator | 2025-11-08 13:46:26 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:46:29.657394 | orchestrator | 2025-11-08 13:46:29 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:46:29.657919 | orchestrator | 2025-11-08 13:46:29 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:46:29.658982 | orchestrator | 2025-11-08 13:46:29 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:46:29.659645 | orchestrator | 2025-11-08 13:46:29 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:46:29.660857 | orchestrator | 2025-11-08 13:46:29 | INFO  | Task 1f7876b4-67de-4b91-a5c2-1ac056351072 is in state SUCCESS 2025-11-08 13:46:29.663122 | orchestrator | 2025-11-08 13:46:29.663168 | orchestrator | 2025-11-08 13:46:29.663177 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-08 13:46:29.663205 | orchestrator | 2025-11-08 13:46:29.663212 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-08 13:46:29.663219 | orchestrator | Saturday 08 November 2025 13:45:24 +0000 (0:00:00.294) 0:00:00.294 ***** 2025-11-08 13:46:29.663226 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:46:29.663233 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:46:29.663240 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:46:29.663247 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:46:29.663253 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:46:29.663259 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:46:29.663265 | orchestrator | 2025-11-08 13:46:29.663271 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-08 13:46:29.663278 | orchestrator | Saturday 08 November 2025 13:45:25 +0000 (0:00:00.941) 0:00:01.235 ***** 2025-11-08 13:46:29.663284 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-11-08 13:46:29.663290 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-11-08 13:46:29.663296 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-11-08 13:46:29.663303 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-11-08 13:46:29.663309 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-11-08 13:46:29.663316 | 
orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-11-08 13:46:29.663322 | orchestrator | 2025-11-08 13:46:29.663328 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-11-08 13:46:29.663334 | orchestrator | 2025-11-08 13:46:29.663340 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-11-08 13:46:29.663346 | orchestrator | Saturday 08 November 2025 13:45:26 +0000 (0:00:00.647) 0:00:01.883 ***** 2025-11-08 13:46:29.663354 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:46:29.663363 | orchestrator | 2025-11-08 13:46:29.663370 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-11-08 13:46:29.663377 | orchestrator | Saturday 08 November 2025 13:45:27 +0000 (0:00:01.192) 0:00:03.076 ***** 2025-11-08 13:46:29.663384 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-11-08 13:46:29.663390 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-11-08 13:46:29.663396 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-11-08 13:46:29.663402 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-11-08 13:46:29.663408 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-11-08 13:46:29.663414 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-11-08 13:46:29.663421 | orchestrator | 2025-11-08 13:46:29.663427 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-11-08 13:46:29.663434 | orchestrator | Saturday 08 November 2025 13:45:28 +0000 (0:00:01.332) 0:00:04.408 ***** 2025-11-08 13:46:29.663441 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-11-08 13:46:29.663447 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-11-08 13:46:29.663453 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-11-08 13:46:29.663459 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-11-08 13:46:29.663465 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-11-08 13:46:29.663471 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-11-08 13:46:29.663477 | orchestrator | 2025-11-08 13:46:29.663483 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-11-08 13:46:29.663489 | orchestrator | Saturday 08 November 2025 13:45:30 +0000 (0:00:01.585) 0:00:05.994 ***** 2025-11-08 13:46:29.663495 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-11-08 13:46:29.663511 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:46:29.663519 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-11-08 13:46:29.663525 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:46:29.663531 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-11-08 13:46:29.663538 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:46:29.663545 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-11-08 13:46:29.663552 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:46:29.663558 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-11-08 13:46:29.663564 | orchestrator | skipping: [testbed-node-1] 2025-11-08 
13:46:29.663569 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-11-08 13:46:29.663576 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:46:29.663582 | orchestrator | 2025-11-08 13:46:29.663589 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-11-08 13:46:29.663595 | orchestrator | Saturday 08 November 2025 13:45:31 +0000 (0:00:01.137) 0:00:07.131 ***** 2025-11-08 13:46:29.663601 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:46:29.663607 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:46:29.663613 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:46:29.663619 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:46:29.663626 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:46:29.663631 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:46:29.663638 | orchestrator | 2025-11-08 13:46:29.663644 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-11-08 13:46:29.663651 | orchestrator | Saturday 08 November 2025 13:45:32 +0000 (0:00:00.756) 0:00:07.888 ***** 2025-11-08 13:46:29.663681 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-08 13:46:29.663695 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-08 13:46:29.663701 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-08 13:46:29.663709 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 
'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-08 13:46:29.663742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-08 13:46:29.663753 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-08 13:46:29.663766 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-08 13:46:29.663774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-08 13:46:29.663781 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-08 13:46:29.663794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-08 13:46:29.663801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-08 13:46:29.663816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-08 13:46:29.663823 | orchestrator | 2025-11-08 13:46:29.663830 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-11-08 13:46:29.663837 | orchestrator | Saturday 08 November 2025 13:45:34 +0000 (0:00:02.077) 0:00:09.965 ***** 2025-11-08 13:46:29.663843 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-08 13:46:29.663851 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-08 13:46:29.663864 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-08 13:46:29.663872 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-08 13:46:29.663879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-08 13:46:29.663903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-08 13:46:29.663911 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-08 13:46:29.663918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-08 13:46:29.663930 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-08 13:46:29.663937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-08 13:46:29.663944 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-08 13:46:29.663962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-08 13:46:29.663969 | orchestrator | 2025-11-08 13:46:29.663975 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-11-08 13:46:29.663982 | orchestrator | Saturday 08 November 2025 13:45:38 +0000 (0:00:04.553) 0:00:14.519 ***** 2025-11-08 13:46:29.663989 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:46:29.663996 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:46:29.664002 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:46:29.664009 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:46:29.664017 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:46:29.664024 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:46:29.664032 | orchestrator | 2025-11-08 13:46:29.664039 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-11-08 13:46:29.664046 | orchestrator | Saturday 08 November 2025 13:45:41 +0000 (0:00:02.220) 0:00:16.739 ***** 2025-11-08 13:46:29.664060 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-08 13:46:29.664068 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-08 13:46:29.664075 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-08 13:46:29.664082 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-08 13:46:29.664098 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-08 13:46:29.664106 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-08 13:46:29.664117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 
'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-08 13:46:29.664124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-08 13:46:29.664131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-08 13:46:29.664138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-08 13:46:29.664155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-08 13:46:29.664163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-08 13:46:29.664175 | orchestrator | 2025-11-08 13:46:29.664182 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-11-08 13:46:29.664188 | orchestrator | Saturday 08 November 2025 13:45:44 +0000 (0:00:03.420) 0:00:20.160 ***** 2025-11-08 13:46:29.664195 | orchestrator | 2025-11-08 13:46:29.664202 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-11-08 13:46:29.664208 | orchestrator | Saturday 08 November 2025 13:45:44 +0000 (0:00:00.337) 0:00:20.498 ***** 2025-11-08 13:46:29.664214 | orchestrator | 2025-11-08 13:46:29.664221 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-11-08 13:46:29.664227 | orchestrator | Saturday 08 November 2025 13:45:45 +0000 (0:00:00.331) 0:00:20.829 ***** 2025-11-08 13:46:29.664234 | orchestrator | 2025-11-08 13:46:29.664241 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-11-08 13:46:29.664248 | orchestrator | Saturday 08 November 2025 13:45:45 +0000 (0:00:00.587) 0:00:21.417 ***** 2025-11-08 13:46:29.664255 | orchestrator | 2025-11-08 13:46:29.664262 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-11-08 13:46:29.664269 | orchestrator | Saturday 08 November 2025 13:45:46 +0000 (0:00:00.613) 0:00:22.030 ***** 2025-11-08 13:46:29.664275 | orchestrator | 2025-11-08 13:46:29.664281 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-11-08 13:46:29.664288 | orchestrator | Saturday 08 November 2025 13:45:46 +0000 (0:00:00.399) 0:00:22.430 ***** 2025-11-08 13:46:29.664294 | orchestrator | 2025-11-08 13:46:29.664301 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-11-08 13:46:29.664307 | orchestrator | Saturday 08 November 2025 13:45:47 +0000 (0:00:00.788) 0:00:23.219 ***** 2025-11-08 13:46:29.664314 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:46:29.664320 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:46:29.664326 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:46:29.664332 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:46:29.664337 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:46:29.664343 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:46:29.664349 | orchestrator | 2025-11-08 13:46:29.664355 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-11-08 13:46:29.664361 | orchestrator | Saturday 08 November 2025 13:45:53 +0000 (0:00:06.179) 0:00:29.398 ***** 2025-11-08 13:46:29.664366 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:46:29.664372 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:46:29.664378 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:46:29.664384 | orchestrator | ok: 
[testbed-node-0] 2025-11-08 13:46:29.664389 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:46:29.664395 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:46:29.664401 | orchestrator | 2025-11-08 13:46:29.664407 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-11-08 13:46:29.664413 | orchestrator | Saturday 08 November 2025 13:45:55 +0000 (0:00:01.726) 0:00:31.124 ***** 2025-11-08 13:46:29.664418 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:46:29.664424 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:46:29.664429 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:46:29.664435 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:46:29.664441 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:46:29.664447 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:46:29.664453 | orchestrator | 2025-11-08 13:46:29.664459 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-11-08 13:46:29.664473 | orchestrator | Saturday 08 November 2025 13:46:01 +0000 (0:00:05.689) 0:00:36.814 ***** 2025-11-08 13:46:29.664479 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-11-08 13:46:29.664485 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-11-08 13:46:29.664492 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-11-08 13:46:29.664502 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-11-08 13:46:29.664509 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-11-08 13:46:29.664523 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-11-08 13:46:29.664530 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-11-08 13:46:29.664537 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-11-08 13:46:29.664544 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-11-08 13:46:29.664551 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-11-08 13:46:29.664558 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-11-08 13:46:29.664564 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-11-08 13:46:29.664571 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-11-08 13:46:29.664577 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-11-08 13:46:29.664583 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-11-08 13:46:29.664590 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': 
True, 'state': 'absent'}) 2025-11-08 13:46:29.664596 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-11-08 13:46:29.664603 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-11-08 13:46:29.664609 | orchestrator | 2025-11-08 13:46:29.664616 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-11-08 13:46:29.664622 | orchestrator | Saturday 08 November 2025 13:46:09 +0000 (0:00:08.230) 0:00:45.045 ***** 2025-11-08 13:46:29.664629 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-11-08 13:46:29.664636 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:46:29.664642 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-11-08 13:46:29.664649 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:46:29.664655 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-11-08 13:46:29.664662 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:46:29.664669 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-11-08 13:46:29.664676 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-11-08 13:46:29.664682 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-11-08 13:46:29.664689 | orchestrator | 2025-11-08 13:46:29.664695 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-11-08 13:46:29.664702 | orchestrator | Saturday 08 November 2025 13:46:13 +0000 (0:00:03.679) 0:00:48.725 ***** 2025-11-08 13:46:29.664709 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-11-08 13:46:29.664743 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:46:29.664751 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-11-08 13:46:29.664758 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:46:29.664765 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-11-08 13:46:29.664770 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:46:29.664776 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-11-08 13:46:29.664783 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-11-08 13:46:29.664788 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-11-08 13:46:29.664794 | orchestrator | 2025-11-08 13:46:29.664801 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-11-08 13:46:29.664807 | orchestrator | Saturday 08 November 2025 13:46:18 +0000 (0:00:05.191) 0:00:53.917 ***** 2025-11-08 13:46:29.664813 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:46:29.664820 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:46:29.664825 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:46:29.664829 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:46:29.664833 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:46:29.664837 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:46:29.664840 | orchestrator | 2025-11-08 13:46:29.664844 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 13:46:29.664849 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-11-08 13:46:29.664855 | orchestrator | testbed-node-1 : ok=15  changed=11  
unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-11-08 13:46:29.664858 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-11-08 13:46:29.664863 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-11-08 13:46:29.664867 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-11-08 13:46:29.664875 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-11-08 13:46:29.664879 | orchestrator | 2025-11-08 13:46:29.664883 | orchestrator | 2025-11-08 13:46:29.664887 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 13:46:29.664891 | orchestrator | Saturday 08 November 2025 13:46:28 +0000 (0:00:09.780) 0:01:03.697 ***** 2025-11-08 13:46:29.664895 | orchestrator | =============================================================================== 2025-11-08 13:46:29.664899 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 15.47s 2025-11-08 13:46:29.664903 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.23s 2025-11-08 13:46:29.664907 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 6.18s 2025-11-08 13:46:29.664910 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 5.19s 2025-11-08 13:46:29.664919 | orchestrator | openvswitch : Copying over config.json files for services --------------- 4.55s 2025-11-08 13:46:29.664923 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.68s 2025-11-08 13:46:29.664927 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.42s 2025-11-08 13:46:29.664931 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 3.06s 2025-11-08 13:46:29.664934 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.22s 2025-11-08 13:46:29.664938 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.08s 2025-11-08 13:46:29.664948 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.73s 2025-11-08 13:46:29.664952 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.59s 2025-11-08 13:46:29.664956 | orchestrator | module-load : Load modules ---------------------------------------------- 1.33s 2025-11-08 13:46:29.664960 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.19s 2025-11-08 13:46:29.664964 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.14s 2025-11-08 13:46:29.664967 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.94s 2025-11-08 13:46:29.664971 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.76s 2025-11-08 13:46:29.664975 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.65s 2025-11-08 13:46:29.664979 | orchestrator | 2025-11-08 13:46:29 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:46:29.664982 | orchestrator | 2025-11-08 13:46:29 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:46:32.721968 | 
orchestrator | 2025-11-08 13:46:32 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:46:32.725310 | orchestrator | 2025-11-08 13:46:32 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:46:32.725856 | orchestrator | 2025-11-08 13:46:32 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:46:32.726773 | orchestrator | 2025-11-08 13:46:32 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:46:32.727930 | orchestrator | 2025-11-08 13:46:32 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:46:32.727969 | orchestrator | 2025-11-08 13:46:32 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:46:35.770472 | orchestrator | 2025-11-08 13:46:35 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:46:35.770536 | orchestrator | 2025-11-08 13:46:35 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:46:35.770545 | orchestrator | 2025-11-08 13:46:35 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:46:35.770551 | orchestrator | 2025-11-08 13:46:35 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:46:35.771049 | orchestrator | 2025-11-08 13:46:35 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:46:35.771065 | orchestrator | 2025-11-08 13:46:35 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:46:38.811544 | orchestrator | 2025-11-08 13:46:38 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:46:38.811922 | orchestrator | 2025-11-08 13:46:38 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:46:38.813060 | orchestrator | 2025-11-08 13:46:38 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:46:38.813946 | orchestrator | 2025-11-08 13:46:38 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:46:38.814429 | orchestrator | 2025-11-08 13:46:38 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:46:38.814458 | orchestrator | 2025-11-08 13:46:38 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:46:41.869613 | orchestrator | 2025-11-08 13:46:41 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:46:41.870368 | orchestrator | 2025-11-08 13:46:41 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:46:41.873281 | orchestrator | 2025-11-08 13:46:41 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:46:41.877416 | orchestrator | 2025-11-08 13:46:41 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:46:41.879864 | orchestrator | 2025-11-08 13:46:41 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:46:41.879940 | orchestrator | 2025-11-08 13:46:41 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:46:44.921436 | orchestrator | 2025-11-08 13:46:44 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:46:44.921574 | orchestrator | 2025-11-08 13:46:44 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:46:44.922399 | orchestrator | 2025-11-08 13:46:44 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 
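For reference, the openvswitch play that finished above (system-id/hostname external_ids, br-ex bridge, vxlan0 port) boils down to a handful of ovs-vsctl operations. A minimal Python sketch of those calls, for illustration only — the node name, bridge, and port values are taken from this run's output, not from the kolla-ansible role itself:

    import subprocess

    def ovs_vsctl(*args):
        # Thin wrapper around the ovs-vsctl CLI; raises if the command fails.
        subprocess.run(["ovs-vsctl", *args], check=True)

    # Tag the local switch as in the "Set system-id, hostname and hw-offload" task.
    ovs_vsctl("set", "Open_vSwitch", ".", "external_ids:system-id=testbed-node-0")
    ovs_vsctl("set", "Open_vSwitch", ".", "external_ids:hostname=testbed-node-0")

    # Ensure the external bridge and its port exist (idempotent via --may-exist).
    ovs_vsctl("--may-exist", "add-br", "br-ex")
    ovs_vsctl("--may-exist", "add-port", "br-ex", "vxlan0")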
2025-11-08 13:46:44.923237 | orchestrator | 2025-11-08 13:46:44 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:46:44.925763 | orchestrator | 2025-11-08 13:46:44 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:46:44.925813 | orchestrator | 2025-11-08 13:46:44 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:46:47.958925 | orchestrator | 2025-11-08 13:46:47 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:46:47.959828 | orchestrator | 2025-11-08 13:46:47 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:46:47.961009 | orchestrator | 2025-11-08 13:46:47 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:46:47.964701 | orchestrator | 2025-11-08 13:46:47 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:46:47.964807 | orchestrator | 2025-11-08 13:46:47 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:46:47.964815 | orchestrator | 2025-11-08 13:46:47 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:46:51.019434 | orchestrator | 2025-11-08 13:46:51 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:46:51.020051 | orchestrator | 2025-11-08 13:46:51 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:46:51.020969 | orchestrator | 2025-11-08 13:46:51 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:46:51.025691 | orchestrator | 2025-11-08 13:46:51 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:46:51.026538 | orchestrator | 2025-11-08 13:46:51 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:46:51.026579 | orchestrator | 2025-11-08 13:46:51 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:46:54.069517 | orchestrator | 2025-11-08 13:46:54 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:46:54.072424 | orchestrator | 2025-11-08 13:46:54 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:46:54.074850 | orchestrator | 2025-11-08 13:46:54 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:46:54.076902 | orchestrator | 2025-11-08 13:46:54 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:46:54.079121 | orchestrator | 2025-11-08 13:46:54 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:46:54.079380 | orchestrator | 2025-11-08 13:46:54 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:46:57.123793 | orchestrator | 2025-11-08 13:46:57 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:46:57.126555 | orchestrator | 2025-11-08 13:46:57 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:46:57.129656 | orchestrator | 2025-11-08 13:46:57 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:46:57.130798 | orchestrator | 2025-11-08 13:46:57 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:46:57.132347 | orchestrator | 2025-11-08 13:46:57 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:46:57.132457 | orchestrator | 2025-11-08 13:46:57 | INFO  | Wait 1 second(s) until the next check 
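The interleaved status lines here come from a simple poll-and-wait loop: each task ID is checked once per cycle and the loop sleeps for a second until the task leaves the STARTED state. A rough sketch of that pattern — get_task_state is a hypothetical lookup helper, not the actual osism client API:

    import time

    def wait_for_tasks(task_ids, get_task_state, delay=1):
        # Poll each task until none of them is still in the STARTED state.
        pending = set(task_ids)
        while pending:
            for task_id in list(pending):
                state = get_task_state(task_id)  # hypothetical lookup, e.g. via the manager API
                print(f"Task {task_id} is in state {state}")
                if state != "STARTED":
                    pending.discard(task_id)
            if pending:
                print(f"Wait {delay} second(s) until the next check")
                time.sleep(delay)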
2025-11-08 13:47:00.176908 | orchestrator | 2025-11-08 13:47:00 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:47:00.177016 | orchestrator | 2025-11-08 13:47:00 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:47:00.179098 | orchestrator | 2025-11-08 13:47:00 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:47:00.180551 | orchestrator | 2025-11-08 13:47:00 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:47:00.181921 | orchestrator | 2025-11-08 13:47:00 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:47:00.181961 | orchestrator | 2025-11-08 13:47:00 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:47:03.216181 | orchestrator | 2025-11-08 13:47:03 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:47:03.218094 | orchestrator | 2025-11-08 13:47:03 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:47:03.219803 | orchestrator | 2025-11-08 13:47:03 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:47:03.223466 | orchestrator | 2025-11-08 13:47:03 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:47:03.224556 | orchestrator | 2025-11-08 13:47:03 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:47:03.224599 | orchestrator | 2025-11-08 13:47:03 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:47:06.282663 | orchestrator | 2025-11-08 13:47:06 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:47:06.283030 | orchestrator | 2025-11-08 13:47:06 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:47:06.283968 | orchestrator | 2025-11-08 13:47:06 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:47:06.284633 | orchestrator | 2025-11-08 13:47:06 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:47:06.287357 | orchestrator | 2025-11-08 13:47:06 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:47:06.287439 | orchestrator | 2025-11-08 13:47:06 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:47:09.329436 | orchestrator | 2025-11-08 13:47:09 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:47:09.331658 | orchestrator | 2025-11-08 13:47:09 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state STARTED 2025-11-08 13:47:09.333193 | orchestrator | 2025-11-08 13:47:09 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:47:09.336754 | orchestrator | 2025-11-08 13:47:09 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:47:09.338399 | orchestrator | 2025-11-08 13:47:09 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:47:09.338471 | orchestrator | 2025-11-08 13:47:09 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:47:12.385674 | orchestrator | 2025-11-08 13:47:12 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:47:12.389129 | orchestrator | 2025-11-08 13:47:12 | INFO  | Task 8b9196bc-1762-48ca-898a-fa842751b481 is in state SUCCESS 2025-11-08 13:47:12.391223 | orchestrator | 2025-11-08 13:47:12.391276 | orchestrator | 2025-11-08 13:47:12.391289 
| orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-11-08 13:47:12.391302 | orchestrator | 2025-11-08 13:47:12.391313 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-11-08 13:47:12.391326 | orchestrator | Saturday 08 November 2025 13:43:03 +0000 (0:00:00.166) 0:00:00.166 ***** 2025-11-08 13:47:12.391337 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:47:12.391350 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:47:12.391361 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:47:12.391372 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:47:12.391392 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:47:12.391411 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:47:12.391429 | orchestrator | 2025-11-08 13:47:12.391447 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-11-08 13:47:12.391466 | orchestrator | Saturday 08 November 2025 13:43:04 +0000 (0:00:00.636) 0:00:00.802 ***** 2025-11-08 13:47:12.391482 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:47:12.391504 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:47:12.391523 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:47:12.391543 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:47:12.391574 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:47:12.391589 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:47:12.391610 | orchestrator | 2025-11-08 13:47:12.391641 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-11-08 13:47:12.391660 | orchestrator | Saturday 08 November 2025 13:43:05 +0000 (0:00:00.704) 0:00:01.506 ***** 2025-11-08 13:47:12.391677 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:47:12.391695 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:47:12.391713 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:47:12.391760 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:47:12.391777 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:47:12.391794 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:47:12.391812 | orchestrator | 2025-11-08 13:47:12.391824 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-11-08 13:47:12.391837 | orchestrator | Saturday 08 November 2025 13:43:05 +0000 (0:00:00.800) 0:00:02.307 ***** 2025-11-08 13:47:12.391849 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:47:12.391861 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:47:12.391874 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:47:12.391886 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:47:12.391898 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:47:12.391910 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:47:12.391929 | orchestrator | 2025-11-08 13:47:12.391956 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-11-08 13:47:12.391976 | orchestrator | Saturday 08 November 2025 13:43:08 +0000 (0:00:02.383) 0:00:04.690 ***** 2025-11-08 13:47:12.391994 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:47:12.392014 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:47:12.392032 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:47:12.392053 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:47:12.392070 | orchestrator | changed: [testbed-node-2] 2025-11-08 
13:47:12.392089 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:47:12.392104 | orchestrator | 2025-11-08 13:47:12.392116 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-11-08 13:47:12.392129 | orchestrator | Saturday 08 November 2025 13:43:10 +0000 (0:00:02.071) 0:00:06.761 ***** 2025-11-08 13:47:12.392166 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:47:12.392178 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:47:12.392190 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:47:12.392201 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:47:12.392211 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:47:12.392222 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:47:12.392232 | orchestrator | 2025-11-08 13:47:12.392243 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-11-08 13:47:12.392253 | orchestrator | Saturday 08 November 2025 13:43:11 +0000 (0:00:01.147) 0:00:07.909 ***** 2025-11-08 13:47:12.392264 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:47:12.392274 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:47:12.392285 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:47:12.392295 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:47:12.392306 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:47:12.392316 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:47:12.392326 | orchestrator | 2025-11-08 13:47:12.392337 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-11-08 13:47:12.392348 | orchestrator | Saturday 08 November 2025 13:43:12 +0000 (0:00:00.582) 0:00:08.492 ***** 2025-11-08 13:47:12.392359 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:47:12.392369 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:47:12.392380 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:47:12.392390 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:47:12.392401 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:47:12.392411 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:47:12.392422 | orchestrator | 2025-11-08 13:47:12.392433 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-11-08 13:47:12.392443 | orchestrator | Saturday 08 November 2025 13:43:12 +0000 (0:00:00.544) 0:00:09.036 ***** 2025-11-08 13:47:12.392454 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-11-08 13:47:12.392464 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-11-08 13:47:12.392475 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:47:12.392486 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-11-08 13:47:12.392496 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-11-08 13:47:12.392506 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:47:12.392517 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-11-08 13:47:12.392528 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-11-08 13:47:12.392538 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:47:12.392549 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-11-08 13:47:12.392577 | orchestrator | 
skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-11-08 13:47:12.392588 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:47:12.392599 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-11-08 13:47:12.392609 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-11-08 13:47:12.392620 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:47:12.392631 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-11-08 13:47:12.392641 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-11-08 13:47:12.392652 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:47:12.392662 | orchestrator | 2025-11-08 13:47:12.392673 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-11-08 13:47:12.392684 | orchestrator | Saturday 08 November 2025 13:43:13 +0000 (0:00:00.660) 0:00:09.697 ***** 2025-11-08 13:47:12.392694 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:47:12.392705 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:47:12.392747 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:47:12.392765 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:47:12.392776 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:47:12.392787 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:47:12.392797 | orchestrator | 2025-11-08 13:47:12.392808 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-11-08 13:47:12.392821 | orchestrator | Saturday 08 November 2025 13:43:15 +0000 (0:00:01.929) 0:00:11.626 ***** 2025-11-08 13:47:12.392832 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:47:12.392843 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:47:12.392853 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:47:12.392864 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:47:12.392874 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:47:12.392885 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:47:12.392896 | orchestrator | 2025-11-08 13:47:12.392907 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-11-08 13:47:12.392917 | orchestrator | Saturday 08 November 2025 13:43:16 +0000 (0:00:00.837) 0:00:12.464 ***** 2025-11-08 13:47:12.392928 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:47:12.392939 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:47:12.392949 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:47:12.392960 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:47:12.392970 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:47:12.392981 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:47:12.392991 | orchestrator | 2025-11-08 13:47:12.393002 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-11-08 13:47:12.393013 | orchestrator | Saturday 08 November 2025 13:43:21 +0000 (0:00:05.443) 0:00:17.907 ***** 2025-11-08 13:47:12.393024 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:47:12.393034 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:47:12.393045 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:47:12.393055 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:47:12.393066 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:47:12.393077 | 
orchestrator | skipping: [testbed-node-2] 2025-11-08 13:47:12.393087 | orchestrator | 2025-11-08 13:47:12.393098 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-11-08 13:47:12.393108 | orchestrator | Saturday 08 November 2025 13:43:23 +0000 (0:00:02.292) 0:00:20.200 ***** 2025-11-08 13:47:12.393119 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:47:12.393130 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:47:12.393140 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:47:12.393151 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:47:12.393161 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:47:12.393172 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:47:12.393182 | orchestrator | 2025-11-08 13:47:12.393193 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-11-08 13:47:12.393206 | orchestrator | Saturday 08 November 2025 13:43:25 +0000 (0:00:01.815) 0:00:22.015 ***** 2025-11-08 13:47:12.393217 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:47:12.393227 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:47:12.393238 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:47:12.393248 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:47:12.393259 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:47:12.393269 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:47:12.393280 | orchestrator | 2025-11-08 13:47:12.393290 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-11-08 13:47:12.393301 | orchestrator | Saturday 08 November 2025 13:43:27 +0000 (0:00:01.782) 0:00:23.798 ***** 2025-11-08 13:47:12.393312 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2025-11-08 13:47:12.393323 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2025-11-08 13:47:12.393333 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:47:12.393352 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2025-11-08 13:47:12.393363 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2025-11-08 13:47:12.393374 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:47:12.393385 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2025-11-08 13:47:12.393395 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2025-11-08 13:47:12.393406 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:47:12.393528 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2025-11-08 13:47:12.393545 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2025-11-08 13:47:12.393556 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:47:12.393566 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2025-11-08 13:47:12.393577 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2025-11-08 13:47:12.393588 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:47:12.393598 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2025-11-08 13:47:12.393609 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2025-11-08 13:47:12.393620 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:47:12.393631 | orchestrator | 2025-11-08 13:47:12.393642 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-11-08 13:47:12.393663 | orchestrator | 
Saturday 08 November 2025 13:43:28 +0000 (0:00:01.418) 0:00:25.216 ***** 2025-11-08 13:47:12.393674 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:47:12.393685 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:47:12.393696 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:47:12.393706 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:47:12.393751 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:47:12.393763 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:47:12.393773 | orchestrator | 2025-11-08 13:47:12.393784 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2025-11-08 13:47:12.393795 | orchestrator | Saturday 08 November 2025 13:43:29 +0000 (0:00:00.729) 0:00:25.945 ***** 2025-11-08 13:47:12.393806 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:47:12.393816 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:47:12.393827 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:47:12.393838 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:47:12.393848 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:47:12.393859 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:47:12.393870 | orchestrator | 2025-11-08 13:47:12.393881 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-11-08 13:47:12.393892 | orchestrator | 2025-11-08 13:47:12.393909 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-11-08 13:47:12.393920 | orchestrator | Saturday 08 November 2025 13:43:31 +0000 (0:00:01.515) 0:00:27.460 ***** 2025-11-08 13:47:12.393931 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:47:12.393942 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:47:12.393952 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:47:12.393963 | orchestrator | 2025-11-08 13:47:12.393974 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-11-08 13:47:12.393985 | orchestrator | Saturday 08 November 2025 13:43:33 +0000 (0:00:02.049) 0:00:29.510 ***** 2025-11-08 13:47:12.393995 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:47:12.394006 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:47:12.394081 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:47:12.394096 | orchestrator | 2025-11-08 13:47:12.394107 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-11-08 13:47:12.394118 | orchestrator | Saturday 08 November 2025 13:43:34 +0000 (0:00:01.559) 0:00:31.070 ***** 2025-11-08 13:47:12.394129 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:47:12.394140 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:47:12.394150 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:47:12.394163 | orchestrator | 2025-11-08 13:47:12.394175 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-11-08 13:47:12.394197 | orchestrator | Saturday 08 November 2025 13:43:36 +0000 (0:00:01.371) 0:00:32.441 ***** 2025-11-08 13:47:12.394209 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:47:12.394222 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:47:12.394234 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:47:12.394245 | orchestrator | 2025-11-08 13:47:12.394258 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-11-08 13:47:12.394270 | orchestrator | 
Saturday 08 November 2025 13:43:36 +0000 (0:00:00.867) 0:00:33.309 ***** 2025-11-08 13:47:12.394283 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:47:12.394295 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:47:12.394307 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:47:12.394318 | orchestrator | 2025-11-08 13:47:12.394330 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2025-11-08 13:47:12.394342 | orchestrator | Saturday 08 November 2025 13:43:37 +0000 (0:00:00.321) 0:00:33.631 ***** 2025-11-08 13:47:12.394354 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:47:12.394366 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:47:12.394378 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:47:12.394391 | orchestrator | 2025-11-08 13:47:12.394403 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2025-11-08 13:47:12.394415 | orchestrator | Saturday 08 November 2025 13:43:38 +0000 (0:00:00.907) 0:00:34.538 ***** 2025-11-08 13:47:12.394427 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:47:12.394439 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:47:12.394451 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:47:12.394463 | orchestrator | 2025-11-08 13:47:12.394475 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-11-08 13:47:12.394487 | orchestrator | Saturday 08 November 2025 13:43:39 +0000 (0:00:01.769) 0:00:36.308 ***** 2025-11-08 13:47:12.394499 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:47:12.394512 | orchestrator | 2025-11-08 13:47:12.394522 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-11-08 13:47:12.394533 | orchestrator | Saturday 08 November 2025 13:43:40 +0000 (0:00:00.441) 0:00:36.749 ***** 2025-11-08 13:47:12.394544 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:47:12.394555 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:47:12.394565 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:47:12.394576 | orchestrator | 2025-11-08 13:47:12.394587 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-11-08 13:47:12.394597 | orchestrator | Saturday 08 November 2025 13:43:43 +0000 (0:00:02.837) 0:00:39.586 ***** 2025-11-08 13:47:12.394608 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:47:12.394619 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:47:12.394629 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:47:12.394640 | orchestrator | 2025-11-08 13:47:12.394651 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-11-08 13:47:12.394661 | orchestrator | Saturday 08 November 2025 13:43:43 +0000 (0:00:00.687) 0:00:40.274 ***** 2025-11-08 13:47:12.394672 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:47:12.394682 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:47:12.394693 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:47:12.394703 | orchestrator | 2025-11-08 13:47:12.394714 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-11-08 13:47:12.394744 | orchestrator | Saturday 08 November 2025 13:43:44 +0000 (0:00:00.769) 0:00:41.044 ***** 2025-11-08 13:47:12.394755 | orchestrator | skipping: [testbed-node-2] 2025-11-08 
13:47:12.394765 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:47:12.394776 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:47:12.394787 | orchestrator | 2025-11-08 13:47:12.394798 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-11-08 13:47:12.394817 | orchestrator | Saturday 08 November 2025 13:43:46 +0000 (0:00:01.899) 0:00:42.943 ***** 2025-11-08 13:47:12.394836 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:47:12.394847 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:47:12.394858 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:47:12.394868 | orchestrator | 2025-11-08 13:47:12.394879 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-11-08 13:47:12.394890 | orchestrator | Saturday 08 November 2025 13:43:47 +0000 (0:00:00.648) 0:00:43.591 ***** 2025-11-08 13:47:12.394901 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:47:12.394911 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:47:12.394922 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:47:12.394933 | orchestrator | 2025-11-08 13:47:12.394944 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-11-08 13:47:12.394955 | orchestrator | Saturday 08 November 2025 13:43:47 +0000 (0:00:00.486) 0:00:44.078 ***** 2025-11-08 13:47:12.394966 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:47:12.394976 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:47:12.394987 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:47:12.394998 | orchestrator | 2025-11-08 13:47:12.395014 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2025-11-08 13:47:12.395025 | orchestrator | Saturday 08 November 2025 13:43:49 +0000 (0:00:01.756) 0:00:45.834 ***** 2025-11-08 13:47:12.395036 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:47:12.395047 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:47:12.395058 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:47:12.395068 | orchestrator | 2025-11-08 13:47:12.395079 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2025-11-08 13:47:12.395090 | orchestrator | Saturday 08 November 2025 13:43:52 +0000 (0:00:02.891) 0:00:48.725 ***** 2025-11-08 13:47:12.395101 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:47:12.395112 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:47:12.395122 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:47:12.395133 | orchestrator | 2025-11-08 13:47:12.395144 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-11-08 13:47:12.395155 | orchestrator | Saturday 08 November 2025 13:43:52 +0000 (0:00:00.615) 0:00:49.341 ***** 2025-11-08 13:47:12.395166 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-11-08 13:47:12.395178 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-11-08 13:47:12.395189 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 
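The FAILED - RETRYING lines above are expected at this point: after the transient k3s-init service brings the three servers up, the role keeps polling the API until every master appears as a node (20 attempts are allowed; here the check passes after a few rounds and ends up as the slowest task of the run, about 43 seconds in the later recap). A minimal sketch of that retry pattern, assuming an illustrative node_role_selector variable (standing in for the selector chosen by the preceding "Set node role label selector" task) and the play's host count rather than the role's actual expressions:

  # Sketch of the retry/until pattern only - not the role's actual task.
  # node_role_selector is an assumed variable name.
  - name: Verify that all nodes actually joined (check k3s-init.service if this fails)
    ansible.builtin.command:
      cmd: "k3s kubectl get nodes -l {{ node_role_selector }} -o name"
    register: joined_nodes
    until: (joined_nodes.stdout_lines | default([]) | length) == (ansible_play_hosts | length)
    retries: 20
    delay: 10
    changed_when: false

With until/retries the task stays green as long as one attempt eventually sees all nodes; only if all 20 attempts failed would the play abort with the hint to inspect k3s-init.service.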
2025-11-08 13:47:12.395201 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-11-08 13:47:12.395212 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-11-08 13:47:12.395222 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-11-08 13:47:12.395233 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-11-08 13:47:12.395244 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-11-08 13:47:12.395255 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-11-08 13:47:12.395265 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-11-08 13:47:12.395276 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-11-08 13:47:12.395295 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-11-08 13:47:12.395306 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:47:12.395316 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:47:12.395327 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:47:12.395338 | orchestrator | 2025-11-08 13:47:12.395349 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-11-08 13:47:12.395360 | orchestrator | Saturday 08 November 2025 13:44:36 +0000 (0:00:43.370) 0:01:32.712 ***** 2025-11-08 13:47:12.395371 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:47:12.395382 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:47:12.395392 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:47:12.395508 | orchestrator | 2025-11-08 13:47:12.395525 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-11-08 13:47:12.395536 | orchestrator | Saturday 08 November 2025 13:44:36 +0000 (0:00:00.286) 0:01:32.998 ***** 2025-11-08 13:47:12.395547 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:47:12.395558 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:47:12.395568 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:47:12.395579 | orchestrator | 2025-11-08 13:47:12.395688 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-11-08 13:47:12.395742 | orchestrator | Saturday 08 November 2025 13:44:37 +0000 (0:00:00.997) 0:01:33.996 ***** 2025-11-08 13:47:12.395763 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:47:12.395781 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:47:12.395799 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:47:12.395818 | orchestrator | 2025-11-08 13:47:12.395848 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-11-08 13:47:12.395865 | orchestrator | Saturday 08 November 2025 13:44:38 +0000 (0:00:01.210) 
0:01:35.206 ***** 2025-11-08 13:47:12.395890 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:47:12.395912 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:47:12.395929 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:47:12.395947 | orchestrator | 2025-11-08 13:47:12.395964 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-11-08 13:47:12.395982 | orchestrator | Saturday 08 November 2025 13:45:05 +0000 (0:00:27.028) 0:02:02.235 ***** 2025-11-08 13:47:12.396000 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:47:12.396019 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:47:12.396037 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:47:12.396057 | orchestrator | 2025-11-08 13:47:12.396075 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-11-08 13:47:12.396093 | orchestrator | Saturday 08 November 2025 13:45:06 +0000 (0:00:00.689) 0:02:02.924 ***** 2025-11-08 13:47:12.396105 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:47:12.396116 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:47:12.396126 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:47:12.396146 | orchestrator | 2025-11-08 13:47:12.396157 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-11-08 13:47:12.396168 | orchestrator | Saturday 08 November 2025 13:45:07 +0000 (0:00:00.664) 0:02:03.588 ***** 2025-11-08 13:47:12.396179 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:47:12.396189 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:47:12.396200 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:47:12.396211 | orchestrator | 2025-11-08 13:47:12.396221 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-11-08 13:47:12.396232 | orchestrator | Saturday 08 November 2025 13:45:07 +0000 (0:00:00.701) 0:02:04.290 ***** 2025-11-08 13:47:12.396243 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:47:12.396254 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:47:12.396265 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:47:12.396276 | orchestrator | 2025-11-08 13:47:12.396287 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-11-08 13:47:12.396311 | orchestrator | Saturday 08 November 2025 13:45:08 +0000 (0:00:00.897) 0:02:05.187 ***** 2025-11-08 13:47:12.396324 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:47:12.396336 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:47:12.396348 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:47:12.396359 | orchestrator | 2025-11-08 13:47:12.396371 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-11-08 13:47:12.396384 | orchestrator | Saturday 08 November 2025 13:45:09 +0000 (0:00:00.441) 0:02:05.629 ***** 2025-11-08 13:47:12.396395 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:47:12.396407 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:47:12.396419 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:47:12.396432 | orchestrator | 2025-11-08 13:47:12.396444 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-11-08 13:47:12.396456 | orchestrator | Saturday 08 November 2025 13:45:10 +0000 (0:00:00.788) 0:02:06.417 ***** 2025-11-08 13:47:12.396468 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:47:12.396480 | orchestrator | changed: 
[testbed-node-1] 2025-11-08 13:47:12.396492 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:47:12.396504 | orchestrator | 2025-11-08 13:47:12.396516 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-11-08 13:47:12.396528 | orchestrator | Saturday 08 November 2025 13:45:10 +0000 (0:00:00.687) 0:02:07.105 ***** 2025-11-08 13:47:12.396540 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:47:12.396552 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:47:12.396564 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:47:12.396576 | orchestrator | 2025-11-08 13:47:12.396588 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-11-08 13:47:12.396600 | orchestrator | Saturday 08 November 2025 13:45:11 +0000 (0:00:01.181) 0:02:08.287 ***** 2025-11-08 13:47:12.396612 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:47:12.396624 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:47:12.396635 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:47:12.396648 | orchestrator | 2025-11-08 13:47:12.396660 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-11-08 13:47:12.396671 | orchestrator | Saturday 08 November 2025 13:45:12 +0000 (0:00:00.990) 0:02:09.277 ***** 2025-11-08 13:47:12.396682 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:47:12.396693 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:47:12.396703 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:47:12.396714 | orchestrator | 2025-11-08 13:47:12.396929 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-11-08 13:47:12.396946 | orchestrator | Saturday 08 November 2025 13:45:13 +0000 (0:00:00.292) 0:02:09.570 ***** 2025-11-08 13:47:12.396957 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:47:12.396967 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:47:12.396978 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:47:12.396989 | orchestrator | 2025-11-08 13:47:12.397000 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-11-08 13:47:12.397010 | orchestrator | Saturday 08 November 2025 13:45:13 +0000 (0:00:00.284) 0:02:09.855 ***** 2025-11-08 13:47:12.397021 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:47:12.397032 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:47:12.397043 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:47:12.397053 | orchestrator | 2025-11-08 13:47:12.397064 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2025-11-08 13:47:12.397075 | orchestrator | Saturday 08 November 2025 13:45:14 +0000 (0:00:00.854) 0:02:10.709 ***** 2025-11-08 13:47:12.397085 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:47:12.397096 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:47:12.397107 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:47:12.397117 | orchestrator | 2025-11-08 13:47:12.397129 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2025-11-08 13:47:12.397141 | orchestrator | Saturday 08 November 2025 13:45:14 +0000 (0:00:00.617) 0:02:11.326 ***** 2025-11-08 13:47:12.397164 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-11-08 13:47:12.397189 | 
orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-11-08 13:47:12.397200 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-11-08 13:47:12.397212 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-11-08 13:47:12.397223 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-11-08 13:47:12.397234 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-11-08 13:47:12.397244 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-11-08 13:47:12.397256 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-11-08 13:47:12.397266 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-11-08 13:47:12.397284 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-11-08 13:47:12.397296 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-11-08 13:47:12.397304 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-11-08 13:47:12.397311 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-11-08 13:47:12.397319 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-11-08 13:47:12.397327 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-11-08 13:47:12.397335 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-11-08 13:47:12.397342 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-11-08 13:47:12.397350 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-11-08 13:47:12.397358 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-11-08 13:47:12.397366 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-11-08 13:47:12.397373 | orchestrator | 2025-11-08 13:47:12.397381 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-11-08 13:47:12.397389 | orchestrator | 2025-11-08 13:47:12.397397 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-11-08 13:47:12.397405 | orchestrator | Saturday 08 November 2025 13:45:18 +0000 (0:00:03.098) 0:02:14.425 ***** 2025-11-08 13:47:12.397413 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:47:12.397420 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:47:12.397428 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:47:12.397436 | orchestrator | 2025-11-08 13:47:12.397444 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-11-08 13:47:12.397452 | orchestrator | Saturday 08 November 2025 13:45:18 +0000 (0:00:00.428) 0:02:14.854 ***** 2025-11-08 13:47:12.397460 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:47:12.397468 | orchestrator | ok: [testbed-node-4] 
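The worker play above ("Deploy k3s worker nodes") repeats the agent-side half of the setup: each of testbed-node-3/4/5 gets an /etc/rancher/k3s directory and a custom resolv.conf, and is then joined to the cluster through the k3s-node service. Purely as an illustration (k3s_server_vip and k3s_token are assumed variable names, not the role's actual ones), an agent's join configuration boils down to a server URL and the node-token collected from the first master earlier:

  # Illustration only - not the role's actual tasks. k3s_server_vip stands
  # for the kube-vip address, k3s_token for the node-token read from the
  # first master.
  - name: Write minimal k3s agent configuration
    ansible.builtin.copy:
      dest: /etc/rancher/k3s/config.yaml
      mode: "0600"
      content: |
        server: "https://{{ k3s_server_vip }}:6443"
        token: "{{ k3s_token }}"

  - name: Enable and start the agent service
    ansible.builtin.systemd:
      name: k3s-node
      enabled: true
      state: started
      daemon_reload: true

Most of the time in this play is then spent in "Manage k3s service" (about 10.8 s in the recap), where the agents actually start and register against the API endpoint.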
2025-11-08 13:47:12.397475 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:47:12.397483 | orchestrator | 2025-11-08 13:47:12.397491 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-11-08 13:47:12.397499 | orchestrator | Saturday 08 November 2025 13:45:19 +0000 (0:00:00.614) 0:02:15.468 ***** 2025-11-08 13:47:12.397506 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:47:12.397514 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:47:12.397522 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:47:12.397529 | orchestrator | 2025-11-08 13:47:12.397547 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-11-08 13:47:12.397555 | orchestrator | Saturday 08 November 2025 13:45:19 +0000 (0:00:00.298) 0:02:15.767 ***** 2025-11-08 13:47:12.397564 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 13:47:12.397571 | orchestrator | 2025-11-08 13:47:12.397579 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-11-08 13:47:12.397587 | orchestrator | Saturday 08 November 2025 13:45:19 +0000 (0:00:00.607) 0:02:16.374 ***** 2025-11-08 13:47:12.397595 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:47:12.397602 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:47:12.397610 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:47:12.397618 | orchestrator | 2025-11-08 13:47:12.397626 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2025-11-08 13:47:12.397633 | orchestrator | Saturday 08 November 2025 13:45:20 +0000 (0:00:00.277) 0:02:16.652 ***** 2025-11-08 13:47:12.397641 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:47:12.397649 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:47:12.397657 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:47:12.397664 | orchestrator | 2025-11-08 13:47:12.397672 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-11-08 13:47:12.397680 | orchestrator | Saturday 08 November 2025 13:45:20 +0000 (0:00:00.320) 0:02:16.972 ***** 2025-11-08 13:47:12.397688 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:47:12.397695 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:47:12.397703 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:47:12.397711 | orchestrator | 2025-11-08 13:47:12.397734 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2025-11-08 13:47:12.397742 | orchestrator | Saturday 08 November 2025 13:45:20 +0000 (0:00:00.292) 0:02:17.265 ***** 2025-11-08 13:47:12.397750 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:47:12.397758 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:47:12.397766 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:47:12.397773 | orchestrator | 2025-11-08 13:47:12.397787 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2025-11-08 13:47:12.397795 | orchestrator | Saturday 08 November 2025 13:45:21 +0000 (0:00:00.736) 0:02:18.001 ***** 2025-11-08 13:47:12.397803 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:47:12.397811 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:47:12.397819 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:47:12.397827 | orchestrator | 2025-11-08 13:47:12.397834 | 
orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-11-08 13:47:12.397842 | orchestrator | Saturday 08 November 2025 13:45:22 +0000 (0:00:01.090) 0:02:19.091 ***** 2025-11-08 13:47:12.397850 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:47:12.397858 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:47:12.397866 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:47:12.397873 | orchestrator | 2025-11-08 13:47:12.397881 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-11-08 13:47:12.397889 | orchestrator | Saturday 08 November 2025 13:45:23 +0000 (0:00:01.156) 0:02:20.248 ***** 2025-11-08 13:47:12.397897 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:47:12.397909 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:47:12.397917 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:47:12.397925 | orchestrator | 2025-11-08 13:47:12.397933 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-11-08 13:47:12.397940 | orchestrator | 2025-11-08 13:47:12.397948 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-11-08 13:47:12.397956 | orchestrator | Saturday 08 November 2025 13:45:34 +0000 (0:00:10.763) 0:02:31.011 ***** 2025-11-08 13:47:12.397964 | orchestrator | ok: [testbed-manager] 2025-11-08 13:47:12.397972 | orchestrator | 2025-11-08 13:47:12.397980 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-11-08 13:47:12.397993 | orchestrator | Saturday 08 November 2025 13:45:35 +0000 (0:00:01.175) 0:02:32.186 ***** 2025-11-08 13:47:12.398000 | orchestrator | changed: [testbed-manager] 2025-11-08 13:47:12.398008 | orchestrator | 2025-11-08 13:47:12.398044 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-11-08 13:47:12.398054 | orchestrator | Saturday 08 November 2025 13:45:36 +0000 (0:00:00.485) 0:02:32.672 ***** 2025-11-08 13:47:12.398062 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-11-08 13:47:12.398070 | orchestrator | 2025-11-08 13:47:12.398078 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-11-08 13:47:12.398085 | orchestrator | Saturday 08 November 2025 13:45:36 +0000 (0:00:00.578) 0:02:33.251 ***** 2025-11-08 13:47:12.398093 | orchestrator | changed: [testbed-manager] 2025-11-08 13:47:12.398101 | orchestrator | 2025-11-08 13:47:12.398109 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-11-08 13:47:12.398116 | orchestrator | Saturday 08 November 2025 13:45:37 +0000 (0:00:00.885) 0:02:34.137 ***** 2025-11-08 13:47:12.398124 | orchestrator | changed: [testbed-manager] 2025-11-08 13:47:12.398132 | orchestrator | 2025-11-08 13:47:12.398140 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-11-08 13:47:12.398148 | orchestrator | Saturday 08 November 2025 13:45:38 +0000 (0:00:00.629) 0:02:34.766 ***** 2025-11-08 13:47:12.398156 | orchestrator | changed: [testbed-manager -> localhost] 2025-11-08 13:47:12.398164 | orchestrator | 2025-11-08 13:47:12.398171 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-11-08 13:47:12.398179 | orchestrator | Saturday 08 November 2025 13:45:40 +0000 (0:00:01.806) 0:02:36.573 ***** 
2025-11-08 13:47:12.398187 | orchestrator | changed: [testbed-manager -> localhost] 2025-11-08 13:47:12.398195 | orchestrator | 2025-11-08 13:47:12.398202 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-11-08 13:47:12.398210 | orchestrator | Saturday 08 November 2025 13:45:41 +0000 (0:00:01.244) 0:02:37.817 ***** 2025-11-08 13:47:12.398218 | orchestrator | changed: [testbed-manager] 2025-11-08 13:47:12.398226 | orchestrator | 2025-11-08 13:47:12.398234 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-11-08 13:47:12.398242 | orchestrator | Saturday 08 November 2025 13:45:41 +0000 (0:00:00.557) 0:02:38.375 ***** 2025-11-08 13:47:12.398249 | orchestrator | changed: [testbed-manager] 2025-11-08 13:47:12.398257 | orchestrator | 2025-11-08 13:47:12.398265 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-11-08 13:47:12.398273 | orchestrator | 2025-11-08 13:47:12.398280 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2025-11-08 13:47:12.398288 | orchestrator | Saturday 08 November 2025 13:45:42 +0000 (0:00:01.005) 0:02:39.380 ***** 2025-11-08 13:47:12.398296 | orchestrator | ok: [testbed-manager] 2025-11-08 13:47:12.398304 | orchestrator | 2025-11-08 13:47:12.398311 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2025-11-08 13:47:12.398319 | orchestrator | Saturday 08 November 2025 13:45:43 +0000 (0:00:00.195) 0:02:39.575 ***** 2025-11-08 13:47:12.398327 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-11-08 13:47:12.398335 | orchestrator | 2025-11-08 13:47:12.398342 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2025-11-08 13:47:12.398350 | orchestrator | Saturday 08 November 2025 13:45:43 +0000 (0:00:00.256) 0:02:39.831 ***** 2025-11-08 13:47:12.398358 | orchestrator | ok: [testbed-manager] 2025-11-08 13:47:12.398366 | orchestrator | 2025-11-08 13:47:12.398374 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2025-11-08 13:47:12.398381 | orchestrator | Saturday 08 November 2025 13:45:44 +0000 (0:00:01.052) 0:02:40.884 ***** 2025-11-08 13:47:12.398389 | orchestrator | ok: [testbed-manager] 2025-11-08 13:47:12.398397 | orchestrator | 2025-11-08 13:47:12.398405 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2025-11-08 13:47:12.398412 | orchestrator | Saturday 08 November 2025 13:45:46 +0000 (0:00:01.849) 0:02:42.734 ***** 2025-11-08 13:47:12.398425 | orchestrator | changed: [testbed-manager] 2025-11-08 13:47:12.398433 | orchestrator | 2025-11-08 13:47:12.398441 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2025-11-08 13:47:12.398449 | orchestrator | Saturday 08 November 2025 13:45:47 +0000 (0:00:01.325) 0:02:44.060 ***** 2025-11-08 13:47:12.398457 | orchestrator | ok: [testbed-manager] 2025-11-08 13:47:12.398465 | orchestrator | 2025-11-08 13:47:12.398478 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2025-11-08 13:47:12.398486 | orchestrator | Saturday 08 November 2025 13:45:48 +0000 (0:00:00.746) 0:02:44.806 ***** 2025-11-08 13:47:12.398494 | orchestrator | changed: [testbed-manager] 2025-11-08 13:47:12.398501 | orchestrator | 
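The kubectl role follows the usual Debian/Ubuntu pattern here: fetch the repository signing key, fix its permissions, register the repository, then install the package, which is why "Add repository Debian" (8.13 s) and "Install required packages" (17.42 s) dominate this play in the recap. A rough equivalent, assuming the upstream pkgs.k8s.io packages and a hypothetical kubectl_version variable (the role's actual key path and repository line may differ):

  # Rough equivalent of the repository setup - not the role's exact tasks.
  - name: Add repository gpg key
    ansible.builtin.get_url:
      url: "https://pkgs.k8s.io/core:/stable:/v{{ kubectl_version }}/deb/Release.key"
      dest: /etc/apt/keyrings/kubernetes-apt-keyring.asc
      mode: "0644"

  - name: Add repository Debian
    ansible.builtin.apt_repository:
      repo: "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.asc] https://pkgs.k8s.io/core:/stable:/v{{ kubectl_version }}/deb/ /"
      state: present

  - name: Install required packages
    ansible.builtin.apt:
      name: kubectl
      state: present
      update_cache: true

Setting update_cache makes the freshly added repository visible to apt before the install step runs.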
2025-11-08 13:47:12.398509 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2025-11-08 13:47:12.398517 | orchestrator | Saturday 08 November 2025 13:45:56 +0000 (0:00:08.130) 0:02:52.937 ***** 2025-11-08 13:47:12.398525 | orchestrator | changed: [testbed-manager] 2025-11-08 13:47:12.398533 | orchestrator | 2025-11-08 13:47:12.398540 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2025-11-08 13:47:12.398548 | orchestrator | Saturday 08 November 2025 13:46:13 +0000 (0:00:17.417) 0:03:10.355 ***** 2025-11-08 13:47:12.398556 | orchestrator | ok: [testbed-manager] 2025-11-08 13:47:12.398563 | orchestrator | 2025-11-08 13:47:12.398572 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-11-08 13:47:12.398579 | orchestrator | 2025-11-08 13:47:12.398587 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-11-08 13:47:12.398599 | orchestrator | Saturday 08 November 2025 13:46:14 +0000 (0:00:00.553) 0:03:10.908 ***** 2025-11-08 13:47:12.398607 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:47:12.398614 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:47:12.398622 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:47:12.398630 | orchestrator | 2025-11-08 13:47:12.398638 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-11-08 13:47:12.398645 | orchestrator | Saturday 08 November 2025 13:46:14 +0000 (0:00:00.368) 0:03:11.276 ***** 2025-11-08 13:47:12.398653 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:47:12.398661 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:47:12.398669 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:47:12.398677 | orchestrator | 2025-11-08 13:47:12.398684 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-11-08 13:47:12.398692 | orchestrator | Saturday 08 November 2025 13:46:15 +0000 (0:00:00.365) 0:03:11.642 ***** 2025-11-08 13:47:12.398700 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:47:12.398708 | orchestrator | 2025-11-08 13:47:12.398751 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-11-08 13:47:12.398760 | orchestrator | Saturday 08 November 2025 13:46:16 +0000 (0:00:00.931) 0:03:12.573 ***** 2025-11-08 13:47:12.398768 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-08 13:47:12.398776 | orchestrator | 2025-11-08 13:47:12.398783 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-11-08 13:47:12.398791 | orchestrator | Saturday 08 November 2025 13:46:17 +0000 (0:00:01.127) 0:03:13.701 ***** 2025-11-08 13:47:12.398799 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:47:12.398807 | orchestrator | 2025-11-08 13:47:12.398814 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-11-08 13:47:12.398822 | orchestrator | Saturday 08 November 2025 13:46:17 +0000 (0:00:00.138) 0:03:13.839 ***** 2025-11-08 13:47:12.398830 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-08 13:47:12.398837 | orchestrator | 2025-11-08 13:47:12.398845 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-11-08 13:47:12.398853 | 
orchestrator | Saturday 08 November 2025 13:46:18 +0000 (0:00:01.083) 0:03:14.922 ***** 2025-11-08 13:47:12.398861 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:47:12.398874 | orchestrator | 2025-11-08 13:47:12.398882 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2025-11-08 13:47:12.398890 | orchestrator | Saturday 08 November 2025 13:46:18 +0000 (0:00:00.156) 0:03:15.079 ***** 2025-11-08 13:47:12.398898 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:47:12.398905 | orchestrator | 2025-11-08 13:47:12.398913 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-11-08 13:47:12.398921 | orchestrator | Saturday 08 November 2025 13:46:18 +0000 (0:00:00.192) 0:03:15.271 ***** 2025-11-08 13:47:12.398929 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:47:12.398936 | orchestrator | 2025-11-08 13:47:12.398944 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-11-08 13:47:12.398952 | orchestrator | Saturday 08 November 2025 13:46:19 +0000 (0:00:00.169) 0:03:15.441 ***** 2025-11-08 13:47:12.398960 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:47:12.398967 | orchestrator | 2025-11-08 13:47:12.398975 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-11-08 13:47:12.398983 | orchestrator | Saturday 08 November 2025 13:46:19 +0000 (0:00:00.176) 0:03:15.617 ***** 2025-11-08 13:47:12.398991 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-11-08 13:47:12.398998 | orchestrator | 2025-11-08 13:47:12.399006 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-11-08 13:47:12.399014 | orchestrator | Saturday 08 November 2025 13:46:24 +0000 (0:00:05.624) 0:03:21.241 ***** 2025-11-08 13:47:12.399022 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2025-11-08 13:47:12.399029 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 2025-11-08 13:47:12.399038 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2025-11-08 13:47:12.399045 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2025-11-08 13:47:12.399053 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2025-11-08 13:47:12.399061 | orchestrator | 2025-11-08 13:47:12.399069 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-11-08 13:47:12.399076 | orchestrator | Saturday 08 November 2025 13:47:07 +0000 (0:00:42.694) 0:04:03.936 ***** 2025-11-08 13:47:12.399084 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-08 13:47:12.399092 | orchestrator | 2025-11-08 13:47:12.399099 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-11-08 13:47:12.399107 | orchestrator | Saturday 08 November 2025 13:47:09 +0000 (0:00:02.113) 0:04:06.050 ***** 2025-11-08 13:47:12.399120 | orchestrator | fatal: [testbed-node-0 -> localhost]: FAILED! 
=> {"changed": false, "checksum": "e067333911ec303b1abbababa17374a0629c5a29", "msg": "Destination directory /tmp/k3s does not exist"} 2025-11-08 13:47:12.399130 | orchestrator | 2025-11-08 13:47:12.399138 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 13:47:12.399146 | orchestrator | testbed-manager : ok=18  changed=10  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 13:47:12.399154 | orchestrator | testbed-node-0 : ok=43  changed=20  unreachable=0 failed=1  skipped=24  rescued=0 ignored=0 2025-11-08 13:47:12.399164 | orchestrator | testbed-node-1 : ok=35  changed=16  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-11-08 13:47:12.399177 | orchestrator | testbed-node-2 : ok=35  changed=16  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-11-08 13:47:12.399193 | orchestrator | testbed-node-3 : ok=14  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-11-08 13:47:12.399201 | orchestrator | testbed-node-4 : ok=14  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-11-08 13:47:12.399214 | orchestrator | testbed-node-5 : ok=14  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-11-08 13:47:12.399222 | orchestrator | 2025-11-08 13:47:12.399229 | orchestrator | 2025-11-08 13:47:12.399237 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 13:47:12.399245 | orchestrator | Saturday 08 November 2025 13:47:11 +0000 (0:00:01.812) 0:04:07.862 ***** 2025-11-08 13:47:12.399253 | orchestrator | =============================================================================== 2025-11-08 13:47:12.399261 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.37s 2025-11-08 13:47:12.399269 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.69s 2025-11-08 13:47:12.399277 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 27.03s 2025-11-08 13:47:12.399285 | orchestrator | kubectl : Install required packages ------------------------------------ 17.42s 2025-11-08 13:47:12.399292 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.76s 2025-11-08 13:47:12.399300 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 8.13s 2025-11-08 13:47:12.399308 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.62s 2025-11-08 13:47:12.399315 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.44s 2025-11-08 13:47:12.399323 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.10s 2025-11-08 13:47:12.399331 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.89s 2025-11-08 13:47:12.399339 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.84s 2025-11-08 13:47:12.399347 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.38s 2025-11-08 13:47:12.399354 | orchestrator | k3s_download : Download k3s binary arm64 -------------------------------- 2.29s 2025-11-08 13:47:12.399362 | orchestrator | k3s_server_post : Set _cilium_bgp_neighbors fact ------------------------ 2.11s 2025-11-08 13:47:12.399370 | orchestrator | 
k3s_prereq : Enable IPv6 forwarding ------------------------------------- 2.07s 2025-11-08 13:47:12.399378 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 2.05s 2025-11-08 13:47:12.399385 | orchestrator | k3s_prereq : Add /usr/local/bin to sudo secure_path --------------------- 1.93s 2025-11-08 13:47:12.399393 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.90s 2025-11-08 13:47:12.399401 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.85s 2025-11-08 13:47:12.399408 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 1.82s 2025-11-08 13:47:12.399416 | orchestrator | 2025-11-08 13:47:12 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:47:12.399424 | orchestrator | 2025-11-08 13:47:12 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:47:12.399432 | orchestrator | 2025-11-08 13:47:12 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:47:12.399440 | orchestrator | 2025-11-08 13:47:12 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:47:15.459899 | orchestrator | 2025-11-08 13:47:15 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:47:15.460655 | orchestrator | 2025-11-08 13:47:15 | INFO  | Task d023e4b0-c487-4d99-be79-2f9635706f6f is in state STARTED 2025-11-08 13:47:15.463388 | orchestrator | 2025-11-08 13:47:15 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:47:15.464418 | orchestrator | 2025-11-08 13:47:15 | INFO  | Task 34ac85cf-2d8d-41b1-8e63-4fb3fe7d3b2d is in state STARTED 2025-11-08 13:47:15.465541 | orchestrator | 2025-11-08 13:47:15 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:47:15.465836 | orchestrator | 2025-11-08 13:47:15 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:47:15.466013 | orchestrator | 2025-11-08 13:47:15 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:47:18.564152 | orchestrator | 2025-11-08 13:47:18 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:47:18.574383 | orchestrator | 2025-11-08 13:47:18 | INFO  | Task d023e4b0-c487-4d99-be79-2f9635706f6f is in state STARTED 2025-11-08 13:47:18.576830 | orchestrator | 2025-11-08 13:47:18 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:47:18.583643 | orchestrator | 2025-11-08 13:47:18 | INFO  | Task 34ac85cf-2d8d-41b1-8e63-4fb3fe7d3b2d is in state STARTED 2025-11-08 13:47:18.583754 | orchestrator | 2025-11-08 13:47:18 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:47:18.584433 | orchestrator | 2025-11-08 13:47:18 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:47:18.585187 | orchestrator | 2025-11-08 13:47:18 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:47:21.659396 | orchestrator | 2025-11-08 13:47:21 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:47:21.660175 | orchestrator | 2025-11-08 13:47:21 | INFO  | Task d023e4b0-c487-4d99-be79-2f9635706f6f is in state STARTED 2025-11-08 13:47:21.661033 | orchestrator | 2025-11-08 13:47:21 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:47:21.661640 | orchestrator | 2025-11-08 
13:47:21 | INFO  | Task 34ac85cf-2d8d-41b1-8e63-4fb3fe7d3b2d is in state SUCCESS 2025-11-08 13:47:21.662546 | orchestrator | 2025-11-08 13:47:21 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:47:21.663556 | orchestrator | 2025-11-08 13:47:21 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:47:21.663641 | orchestrator | 2025-11-08 13:47:21 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:47:24.697202 | orchestrator | 2025-11-08 13:47:24 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:47:24.697976 | orchestrator | 2025-11-08 13:47:24 | INFO  | Task d023e4b0-c487-4d99-be79-2f9635706f6f is in state STARTED 2025-11-08 13:47:24.700342 | orchestrator | 2025-11-08 13:47:24 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:47:24.702137 | orchestrator | 2025-11-08 13:47:24 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:47:24.703759 | orchestrator | 2025-11-08 13:47:24 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:47:24.703957 | orchestrator | 2025-11-08 13:47:24 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:47:27.741587 | orchestrator | 2025-11-08 13:47:27 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:47:27.742131 | orchestrator | 2025-11-08 13:47:27 | INFO  | Task d023e4b0-c487-4d99-be79-2f9635706f6f is in state SUCCESS 2025-11-08 13:47:27.742624 | orchestrator | 2025-11-08 13:47:27 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:47:27.743511 | orchestrator | 2025-11-08 13:47:27 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:47:27.744575 | orchestrator | 2025-11-08 13:47:27 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:47:27.744760 | orchestrator | 2025-11-08 13:47:27 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:47:30.801344 | orchestrator | 2025-11-08 13:47:30 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:47:30.802084 | orchestrator | 2025-11-08 13:47:30 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:47:30.803089 | orchestrator | 2025-11-08 13:47:30 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:47:30.804866 | orchestrator | 2025-11-08 13:47:30 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:47:30.804935 | orchestrator | 2025-11-08 13:47:30 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:47:33.865796 | orchestrator | 2025-11-08 13:47:33 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:47:33.867795 | orchestrator | 2025-11-08 13:47:33 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:47:33.872521 | orchestrator | 2025-11-08 13:47:33 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:47:33.873893 | orchestrator | 2025-11-08 13:47:33 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:47:33.874198 | orchestrator | 2025-11-08 13:47:33 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:47:36.925086 | orchestrator | 2025-11-08 13:47:36 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:47:36.930981 | orchestrator | 2025-11-08 
13:47:36 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:47:36.933774 | orchestrator | 2025-11-08 13:47:36 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:47:36.935588 | orchestrator | 2025-11-08 13:47:36 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:47:36.935942 | orchestrator | 2025-11-08 13:47:36 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:47:39.973244 | orchestrator | 2025-11-08 13:47:39 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:47:39.976367 | orchestrator | 2025-11-08 13:47:39 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:47:39.979459 | orchestrator | 2025-11-08 13:47:39 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:47:39.982530 | orchestrator | 2025-11-08 13:47:39 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:47:39.982685 | orchestrator | 2025-11-08 13:47:39 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:47:43.071074 | orchestrator | 2025-11-08 13:47:43 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:47:43.072451 | orchestrator | 2025-11-08 13:47:43 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:47:43.073999 | orchestrator | 2025-11-08 13:47:43 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:47:43.075522 | orchestrator | 2025-11-08 13:47:43 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:47:43.075558 | orchestrator | 2025-11-08 13:47:43 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:47:46.121133 | orchestrator | 2025-11-08 13:47:46 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:47:46.122116 | orchestrator | 2025-11-08 13:47:46 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:47:46.124071 | orchestrator | 2025-11-08 13:47:46 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:47:46.125763 | orchestrator | 2025-11-08 13:47:46 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:47:46.126071 | orchestrator | 2025-11-08 13:47:46 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:47:49.165584 | orchestrator | 2025-11-08 13:47:49 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:47:49.397816 | orchestrator | 2025-11-08 13:47:49 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:47:49.397896 | orchestrator | 2025-11-08 13:47:49 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:47:49.397910 | orchestrator | 2025-11-08 13:47:49 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:47:49.397922 | orchestrator | 2025-11-08 13:47:49 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:47:52.198688 | orchestrator | 2025-11-08 13:47:52 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:47:52.199047 | orchestrator | 2025-11-08 13:47:52 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:47:52.199802 | orchestrator | 2025-11-08 13:47:52 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:47:52.201122 | orchestrator | 2025-11-08 
13:47:52 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:47:52.201176 | orchestrator | 2025-11-08 13:47:52 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:47:55.247518 | orchestrator | 2025-11-08 13:47:55 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:47:55.250280 | orchestrator | 2025-11-08 13:47:55 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:47:55.253438 | orchestrator | 2025-11-08 13:47:55 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:47:55.255588 | orchestrator | 2025-11-08 13:47:55 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:47:55.255631 | orchestrator | 2025-11-08 13:47:55 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:47:58.291491 | orchestrator | 2025-11-08 13:47:58 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:47:58.294686 | orchestrator | 2025-11-08 13:47:58 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:47:58.296795 | orchestrator | 2025-11-08 13:47:58 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:47:58.298067 | orchestrator | 2025-11-08 13:47:58 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:47:58.298102 | orchestrator | 2025-11-08 13:47:58 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:48:01.330513 | orchestrator | 2025-11-08 13:48:01 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:48:01.331510 | orchestrator | 2025-11-08 13:48:01 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:48:01.335060 | orchestrator | 2025-11-08 13:48:01 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:48:01.337126 | orchestrator | 2025-11-08 13:48:01 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:48:01.337195 | orchestrator | 2025-11-08 13:48:01 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:48:04.377136 | orchestrator | 2025-11-08 13:48:04 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:48:04.381906 | orchestrator | 2025-11-08 13:48:04 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:48:04.383027 | orchestrator | 2025-11-08 13:48:04 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:48:04.383620 | orchestrator | 2025-11-08 13:48:04 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:48:04.383639 | orchestrator | 2025-11-08 13:48:04 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:48:07.432230 | orchestrator | 2025-11-08 13:48:07 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:48:07.432352 | orchestrator | 2025-11-08 13:48:07 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:48:07.433872 | orchestrator | 2025-11-08 13:48:07 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:48:07.436541 | orchestrator | 2025-11-08 13:48:07 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:48:07.436581 | orchestrator | 2025-11-08 13:48:07 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:48:10.571460 | orchestrator | 2025-11-08 13:48:10 | INFO  | Task 
f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:48:10.571811 | orchestrator | 2025-11-08 13:48:10 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:48:10.572498 | orchestrator | 2025-11-08 13:48:10 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:48:10.573312 | orchestrator | 2025-11-08 13:48:10 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:48:10.573326 | orchestrator | 2025-11-08 13:48:10 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:48:13.605769 | orchestrator | 2025-11-08 13:48:13 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:48:13.605884 | orchestrator | 2025-11-08 13:48:13 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:48:13.606434 | orchestrator | 2025-11-08 13:48:13 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:48:13.607283 | orchestrator | 2025-11-08 13:48:13 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:48:13.607307 | orchestrator | 2025-11-08 13:48:13 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:48:16.635302 | orchestrator | 2025-11-08 13:48:16 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:48:16.635416 | orchestrator | 2025-11-08 13:48:16 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:48:16.639039 | orchestrator | 2025-11-08 13:48:16 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:48:16.639066 | orchestrator | 2025-11-08 13:48:16 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state STARTED 2025-11-08 13:48:16.639077 | orchestrator | 2025-11-08 13:48:16 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:48:19.677340 | orchestrator | 2025-11-08 13:48:19 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:48:19.679000 | orchestrator | 2025-11-08 13:48:19 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:48:19.680462 | orchestrator | 2025-11-08 13:48:19 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:48:19.682186 | orchestrator | 2025-11-08 13:48:19 | INFO  | Task 006d2d6e-32ff-4f7b-bd59-24b7af649186 is in state SUCCESS 2025-11-08 13:48:19.684467 | orchestrator | 2025-11-08 13:48:19.684535 | orchestrator | 2025-11-08 13:48:19.684552 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-11-08 13:48:19.684573 | orchestrator | 2025-11-08 13:48:19.684593 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-11-08 13:48:19.684630 | orchestrator | Saturday 08 November 2025 13:47:17 +0000 (0:00:00.146) 0:00:00.146 ***** 2025-11-08 13:48:19.684651 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-11-08 13:48:19.684672 | orchestrator | 2025-11-08 13:48:19.684691 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-11-08 13:48:19.684768 | orchestrator | Saturday 08 November 2025 13:47:17 +0000 (0:00:00.842) 0:00:00.989 ***** 2025-11-08 13:48:19.684790 | orchestrator | changed: [testbed-manager] 2025-11-08 13:48:19.684810 | orchestrator | 2025-11-08 13:48:19.684830 | orchestrator | TASK [Change server address in the kubeconfig file] 
**************************** 2025-11-08 13:48:19.684849 | orchestrator | Saturday 08 November 2025 13:47:19 +0000 (0:00:01.510) 0:00:02.499 ***** 2025-11-08 13:48:19.684860 | orchestrator | changed: [testbed-manager] 2025-11-08 13:48:19.684871 | orchestrator | 2025-11-08 13:48:19.684882 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 13:48:19.684893 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 13:48:19.684906 | orchestrator | 2025-11-08 13:48:19.684917 | orchestrator | 2025-11-08 13:48:19.684927 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 13:48:19.684938 | orchestrator | Saturday 08 November 2025 13:47:20 +0000 (0:00:00.609) 0:00:03.108 ***** 2025-11-08 13:48:19.684951 | orchestrator | =============================================================================== 2025-11-08 13:48:19.684970 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.51s 2025-11-08 13:48:19.684988 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.84s 2025-11-08 13:48:19.685007 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.61s 2025-11-08 13:48:19.685025 | orchestrator | 2025-11-08 13:48:19.685044 | orchestrator | 2025-11-08 13:48:19.685063 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-11-08 13:48:19.685081 | orchestrator | 2025-11-08 13:48:19.685100 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-11-08 13:48:19.685119 | orchestrator | Saturday 08 November 2025 13:47:18 +0000 (0:00:00.138) 0:00:00.138 ***** 2025-11-08 13:48:19.685140 | orchestrator | ok: [testbed-manager] 2025-11-08 13:48:19.685160 | orchestrator | 2025-11-08 13:48:19.685175 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-11-08 13:48:19.685188 | orchestrator | Saturday 08 November 2025 13:47:18 +0000 (0:00:00.766) 0:00:00.904 ***** 2025-11-08 13:48:19.685200 | orchestrator | ok: [testbed-manager] 2025-11-08 13:48:19.685212 | orchestrator | 2025-11-08 13:48:19.685223 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-11-08 13:48:19.685236 | orchestrator | Saturday 08 November 2025 13:47:19 +0000 (0:00:00.725) 0:00:01.630 ***** 2025-11-08 13:48:19.685248 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-11-08 13:48:19.685260 | orchestrator | 2025-11-08 13:48:19.685272 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-11-08 13:48:19.685283 | orchestrator | Saturday 08 November 2025 13:47:20 +0000 (0:00:01.019) 0:00:02.649 ***** 2025-11-08 13:48:19.685295 | orchestrator | changed: [testbed-manager] 2025-11-08 13:48:19.685307 | orchestrator | 2025-11-08 13:48:19.685319 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-11-08 13:48:19.685331 | orchestrator | Saturday 08 November 2025 13:47:21 +0000 (0:00:01.214) 0:00:03.864 ***** 2025-11-08 13:48:19.685343 | orchestrator | changed: [testbed-manager] 2025-11-08 13:48:19.685355 | orchestrator | 2025-11-08 13:48:19.685367 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-11-08 
13:48:19.685379 | orchestrator | Saturday 08 November 2025 13:47:22 +0000 (0:00:00.531) 0:00:04.395 ***** 2025-11-08 13:48:19.685403 | orchestrator | changed: [testbed-manager -> localhost] 2025-11-08 13:48:19.685414 | orchestrator | 2025-11-08 13:48:19.685428 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-11-08 13:48:19.685447 | orchestrator | Saturday 08 November 2025 13:47:23 +0000 (0:00:01.509) 0:00:05.904 ***** 2025-11-08 13:48:19.685465 | orchestrator | changed: [testbed-manager -> localhost] 2025-11-08 13:48:19.685483 | orchestrator | 2025-11-08 13:48:19.685500 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-11-08 13:48:19.685518 | orchestrator | Saturday 08 November 2025 13:47:24 +0000 (0:00:00.733) 0:00:06.638 ***** 2025-11-08 13:48:19.685535 | orchestrator | ok: [testbed-manager] 2025-11-08 13:48:19.685551 | orchestrator | 2025-11-08 13:48:19.685569 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-11-08 13:48:19.685586 | orchestrator | Saturday 08 November 2025 13:47:24 +0000 (0:00:00.406) 0:00:07.044 ***** 2025-11-08 13:48:19.685605 | orchestrator | ok: [testbed-manager] 2025-11-08 13:48:19.685622 | orchestrator | 2025-11-08 13:48:19.685639 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 13:48:19.685657 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 13:48:19.685676 | orchestrator | 2025-11-08 13:48:19.685693 | orchestrator | 2025-11-08 13:48:19.685739 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 13:48:19.685756 | orchestrator | Saturday 08 November 2025 13:47:25 +0000 (0:00:00.267) 0:00:07.312 ***** 2025-11-08 13:48:19.685775 | orchestrator | =============================================================================== 2025-11-08 13:48:19.685794 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.51s 2025-11-08 13:48:19.685810 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.21s 2025-11-08 13:48:19.685828 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.02s 2025-11-08 13:48:19.685868 | orchestrator | Get home directory of operator user ------------------------------------- 0.77s 2025-11-08 13:48:19.685886 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.73s 2025-11-08 13:48:19.685903 | orchestrator | Create .kube directory -------------------------------------------------- 0.73s 2025-11-08 13:48:19.685932 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.53s 2025-11-08 13:48:19.685950 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.41s 2025-11-08 13:48:19.685968 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.27s 2025-11-08 13:48:19.685987 | orchestrator | 2025-11-08 13:48:19.686005 | orchestrator | 2025-11-08 13:48:19.686102 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-11-08 13:48:19.686121 | orchestrator | 2025-11-08 13:48:19.686137 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-11-08 13:48:19.686153 | 
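[annotation] The two kubeconfig plays above fetch the admin kubeconfig from the first control-plane node (testbed-node-0, 192.168.16.10) and then rewrite the API server address so the file is usable from the manager. A minimal sketch of that rewrite step; the path, the matched address and the replacement address are assumptions, not values visible in this log:

  # Sketch only: path, regexp and replacement are assumed, not taken from the
  # playbook that produced the output above.
  - name: Change server address in the kubeconfig file (sketch)
    ansible.builtin.replace:
      path: /home/operator/.kube/config          # assumed kubeconfig location
      regexp: 'https://127\.0\.0\.1:6443'        # assumed in-cluster API address
      replace: 'https://192.168.16.10:6443'      # assumed address reachable from the manager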
orchestrator | Saturday 08 November 2025 13:45:51 +0000 (0:00:00.564) 0:00:00.564 ***** 2025-11-08 13:48:19.686171 | orchestrator | ok: [localhost] => { 2025-11-08 13:48:19.686190 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-11-08 13:48:19.686209 | orchestrator | } 2025-11-08 13:48:19.686228 | orchestrator | 2025-11-08 13:48:19.686247 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-11-08 13:48:19.686265 | orchestrator | Saturday 08 November 2025 13:45:51 +0000 (0:00:00.102) 0:00:00.667 ***** 2025-11-08 13:48:19.686287 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-11-08 13:48:19.686308 | orchestrator | ...ignoring 2025-11-08 13:48:19.686326 | orchestrator | 2025-11-08 13:48:19.686343 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-11-08 13:48:19.686378 | orchestrator | Saturday 08 November 2025 13:45:54 +0000 (0:00:03.406) 0:00:04.073 ***** 2025-11-08 13:48:19.686398 | orchestrator | skipping: [localhost] 2025-11-08 13:48:19.686416 | orchestrator | 2025-11-08 13:48:19.686435 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-11-08 13:48:19.686453 | orchestrator | Saturday 08 November 2025 13:45:54 +0000 (0:00:00.257) 0:00:04.331 ***** 2025-11-08 13:48:19.686474 | orchestrator | ok: [localhost] 2025-11-08 13:48:19.686486 | orchestrator | 2025-11-08 13:48:19.686497 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-08 13:48:19.686507 | orchestrator | 2025-11-08 13:48:19.686518 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-08 13:48:19.686528 | orchestrator | Saturday 08 November 2025 13:45:55 +0000 (0:00:00.308) 0:00:04.639 ***** 2025-11-08 13:48:19.686539 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:48:19.686550 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:48:19.686560 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:48:19.686571 | orchestrator | 2025-11-08 13:48:19.686582 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-08 13:48:19.686592 | orchestrator | Saturday 08 November 2025 13:45:55 +0000 (0:00:00.438) 0:00:05.078 ***** 2025-11-08 13:48:19.686603 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-11-08 13:48:19.686614 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-11-08 13:48:19.686625 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-11-08 13:48:19.686636 | orchestrator | 2025-11-08 13:48:19.686647 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-11-08 13:48:19.686657 | orchestrator | 2025-11-08 13:48:19.686668 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-11-08 13:48:19.686679 | orchestrator | Saturday 08 November 2025 13:45:56 +0000 (0:00:01.204) 0:00:06.282 ***** 2025-11-08 13:48:19.686690 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:48:19.686701 | orchestrator | 2025-11-08 13:48:19.686742 | orchestrator | TASK [rabbitmq : Get container facts] 
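[annotation] The "Set kolla_action_rabbitmq" play above probes the RabbitMQ management endpoint and, because nothing is listening yet, the probe times out (the message format is that of Ansible's wait_for module) and the fresh-deploy action is kept. A rough equivalent of this probe-then-decide pattern; the host address and two-second timeout come from the log message, everything else is assumed:

  - name: Check RabbitMQ service (sketch)
    ansible.builtin.wait_for:
      host: 192.168.16.9              # internal VIP taken from the timeout message above
      port: 15672
      search_regex: RabbitMQ Management
      timeout: 2
    register: rabbitmq_probe
    ignore_errors: true               # a timeout just means RabbitMQ is not deployed yet

  - name: Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running (sketch)
    ansible.builtin.set_fact:
      kolla_action_rabbitmq: upgrade
    when: rabbitmq_probe is not failed

  - name: Set kolla_action_rabbitmq = kolla_action_ng (sketch)
    ansible.builtin.set_fact:
      kolla_action_rabbitmq: "{{ kolla_action_ng }}"
    when: rabbitmq_probe is failed

Ignoring the probe failure is what lets the same playbook handle both first-time deploys and later upgrades.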
****************************************** 2025-11-08 13:48:19.686756 | orchestrator | Saturday 08 November 2025 13:45:58 +0000 (0:00:01.423) 0:00:07.705 ***** 2025-11-08 13:48:19.686766 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:48:19.686777 | orchestrator | 2025-11-08 13:48:19.686788 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-11-08 13:48:19.686799 | orchestrator | Saturday 08 November 2025 13:46:00 +0000 (0:00:01.787) 0:00:09.493 ***** 2025-11-08 13:48:19.686809 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:48:19.687083 | orchestrator | 2025-11-08 13:48:19.690012 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2025-11-08 13:48:19.690077 | orchestrator | Saturday 08 November 2025 13:46:00 +0000 (0:00:00.361) 0:00:09.854 ***** 2025-11-08 13:48:19.690093 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:48:19.690110 | orchestrator | 2025-11-08 13:48:19.690125 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-11-08 13:48:19.690141 | orchestrator | Saturday 08 November 2025 13:46:00 +0000 (0:00:00.419) 0:00:10.274 ***** 2025-11-08 13:48:19.690157 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:48:19.690173 | orchestrator | 2025-11-08 13:48:19.690189 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-11-08 13:48:19.690205 | orchestrator | Saturday 08 November 2025 13:46:01 +0000 (0:00:00.989) 0:00:11.264 ***** 2025-11-08 13:48:19.690221 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:48:19.690238 | orchestrator | 2025-11-08 13:48:19.690254 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-11-08 13:48:19.690270 | orchestrator | Saturday 08 November 2025 13:46:03 +0000 (0:00:01.352) 0:00:12.616 ***** 2025-11-08 13:48:19.690287 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:48:19.690323 | orchestrator | 2025-11-08 13:48:19.690340 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-11-08 13:48:19.690374 | orchestrator | Saturday 08 November 2025 13:46:04 +0000 (0:00:01.580) 0:00:14.197 ***** 2025-11-08 13:48:19.690393 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:48:19.690410 | orchestrator | 2025-11-08 13:48:19.690427 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-11-08 13:48:19.690445 | orchestrator | Saturday 08 November 2025 13:46:05 +0000 (0:00:00.928) 0:00:15.126 ***** 2025-11-08 13:48:19.690480 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:48:19.690498 | orchestrator | 2025-11-08 13:48:19.690515 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-11-08 13:48:19.690531 | orchestrator | Saturday 08 November 2025 13:46:06 +0000 (0:00:00.490) 0:00:15.616 ***** 2025-11-08 13:48:19.690547 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:48:19.690564 | orchestrator | 2025-11-08 13:48:19.690580 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-11-08 13:48:19.690598 | orchestrator | Saturday 08 November 2025 13:46:07 +0000 (0:00:00.884) 0:00:16.501 ***** 2025-11-08 13:48:19.690621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 
'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-11-08 13:48:19.690645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-11-08 13:48:19.690664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-11-08 13:48:19.690693 | orchestrator | 2025-11-08 13:48:19.690733 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-11-08 13:48:19.690751 | orchestrator | Saturday 08 November 2025 13:46:08 +0000 (0:00:01.334) 0:00:17.835 ***** 2025-11-08 13:48:19.690788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 
'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-11-08 13:48:19.690807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-11-08 13:48:19.690825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-11-08 13:48:19.690843 | orchestrator | 2025-11-08 13:48:19.690860 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-11-08 13:48:19.690876 | orchestrator | Saturday 08 November 2025 13:46:11 +0000 (0:00:02.628) 0:00:20.464 ***** 2025-11-08 13:48:19.690902 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-11-08 13:48:19.690919 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-11-08 13:48:19.690935 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-11-08 13:48:19.690951 | orchestrator | 2025-11-08 13:48:19.690967 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-11-08 13:48:19.690982 | orchestrator | Saturday 08 November 2025 13:46:14 +0000 (0:00:03.190) 0:00:23.655 ***** 2025-11-08 13:48:19.690997 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-11-08 13:48:19.691013 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-11-08 13:48:19.691029 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-11-08 13:48:19.691044 | orchestrator | 2025-11-08 13:48:19.691059 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-11-08 13:48:19.691085 | orchestrator | Saturday 08 November 2025 13:46:17 +0000 (0:00:02.821) 0:00:26.477 ***** 2025-11-08 13:48:19.691102 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-11-08 13:48:19.691120 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-11-08 13:48:19.691142 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-11-08 13:48:19.691159 | orchestrator | 2025-11-08 13:48:19.691176 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-11-08 13:48:19.691192 | orchestrator | Saturday 08 November 2025 13:46:19 +0000 (0:00:02.151) 0:00:28.628 ***** 2025-11-08 13:48:19.691208 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-11-08 13:48:19.691224 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-11-08 13:48:19.691239 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-11-08 13:48:19.691255 | orchestrator | 2025-11-08 13:48:19.691271 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-11-08 13:48:19.691287 | orchestrator | Saturday 08 November 2025 13:46:23 +0000 (0:00:04.360) 0:00:32.988 ***** 2025-11-08 13:48:19.691302 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-11-08 13:48:19.691318 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-11-08 13:48:19.691334 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-11-08 13:48:19.691351 | orchestrator | 2025-11-08 13:48:19.691368 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-11-08 13:48:19.691384 | orchestrator | Saturday 08 November 2025 13:46:25 +0000 (0:00:01.517) 0:00:34.506 ***** 2025-11-08 13:48:19.691401 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-11-08 13:48:19.691418 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-11-08 13:48:19.691434 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-11-08 13:48:19.691449 | orchestrator | 2025-11-08 13:48:19.691465 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-11-08 13:48:19.691481 | orchestrator | Saturday 08 November 2025 13:46:27 +0000 (0:00:01.895) 0:00:36.401 ***** 2025-11-08 13:48:19.691498 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:48:19.691514 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:48:19.691529 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:48:19.691545 | orchestrator | 2025-11-08 13:48:19.691563 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-11-08 13:48:19.691592 | orchestrator | Saturday 08 November 2025 13:46:27 +0000 (0:00:00.411) 0:00:36.813 ***** 2025-11-08 13:48:19.691611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-11-08 13:48:19.691642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-11-08 13:48:19.691670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-11-08 13:48:19.691688 | orchestrator | 2025-11-08 13:48:19.691704 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-11-08 13:48:19.691800 | orchestrator | Saturday 08 November 2025 13:46:29 +0000 (0:00:01.737) 0:00:38.551 ***** 2025-11-08 13:48:19.691817 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:48:19.691833 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:48:19.691843 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:48:19.691853 | orchestrator | 2025-11-08 13:48:19.691863 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-11-08 13:48:19.691872 | orchestrator | Saturday 08 November 2025 13:46:30 +0000 (0:00:00.999) 0:00:39.550 ***** 2025-11-08 13:48:19.691892 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:48:19.691902 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:48:19.691911 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:48:19.691921 | orchestrator | 2025-11-08 13:48:19.691931 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-11-08 13:48:19.691940 | orchestrator | Saturday 08 November 2025 13:46:38 +0000 (0:00:08.466) 0:00:48.016 ***** 2025-11-08 13:48:19.691950 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:48:19.691959 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:48:19.691969 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:48:19.691978 | orchestrator | 2025-11-08 13:48:19.691988 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-11-08 13:48:19.691997 | orchestrator | 2025-11-08 13:48:19.692007 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-11-08 13:48:19.692017 | orchestrator | Saturday 08 November 2025 13:46:39 +0000 (0:00:00.646) 0:00:48.663 ***** 2025-11-08 13:48:19.692026 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:48:19.692036 | orchestrator | 2025-11-08 13:48:19.692046 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-11-08 13:48:19.692059 | orchestrator | Saturday 08 November 2025 13:46:39 +0000 (0:00:00.668) 0:00:49.332 ***** 2025-11-08 13:48:19.692073 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:48:19.692083 | orchestrator | 2025-11-08 13:48:19.692093 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-11-08 13:48:19.692102 | orchestrator | Saturday 08 November 2025 13:46:40 +0000 (0:00:00.260) 0:00:49.592 ***** 2025-11-08 13:48:19.692112 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:48:19.692121 | orchestrator | 2025-11-08 13:48:19.692131 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] 
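[annotation] "Creating rabbitmq volume" and "Running RabbitMQ bootstrap container" above prepare a named Docker volume and then run a one-shot container with KOLLA_BOOTSTRAP set, so RabbitMQ initialises its data directory before the long-lived container starts. The role drives this through kolla's own container tooling; purely as an illustration, the same idea with community.docker modules and the image, environment and volumes shown in the item output above (the bootstrap container name is an assumption, the cluster cookie is deliberately omitted):

  - name: Creating rabbitmq volume (sketch)
    community.docker.docker_volume:
      name: rabbitmq

  - name: Running RabbitMQ bootstrap container (sketch)
    community.docker.docker_container:
      name: bootstrap_rabbitmq                   # assumed name for the one-shot container
      image: registry.osism.tech/kolla/rabbitmq:2024.2
      env:
        KOLLA_BOOTSTRAP: ""                      # presence of the variable triggers bootstrap
        KOLLA_CONFIG_STRATEGY: COPY_ALWAYS
        RABBITMQ_LOG_DIR: /var/log/kolla/rabbitmq
        # RABBITMQ_CLUSTER_COOKIE omitted; the real value is generated per deployment
      volumes:
        - /etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro
        - rabbitmq:/var/lib/rabbitmq/
        - kolla_logs:/var/log/kolla/
      state: started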
******************************** 2025-11-08 13:48:19.692140 | orchestrator | Saturday 08 November 2025 13:46:47 +0000 (0:00:06.833) 0:00:56.425 ***** 2025-11-08 13:48:19.692150 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:48:19.692159 | orchestrator | 2025-11-08 13:48:19.692169 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-11-08 13:48:19.692179 | orchestrator | 2025-11-08 13:48:19.692188 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-11-08 13:48:19.692198 | orchestrator | Saturday 08 November 2025 13:47:39 +0000 (0:00:52.003) 0:01:48.429 ***** 2025-11-08 13:48:19.692207 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:48:19.692217 | orchestrator | 2025-11-08 13:48:19.692226 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-11-08 13:48:19.692236 | orchestrator | Saturday 08 November 2025 13:47:39 +0000 (0:00:00.625) 0:01:49.055 ***** 2025-11-08 13:48:19.692245 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:48:19.692253 | orchestrator | 2025-11-08 13:48:19.692261 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-11-08 13:48:19.692269 | orchestrator | Saturday 08 November 2025 13:47:39 +0000 (0:00:00.242) 0:01:49.297 ***** 2025-11-08 13:48:19.692277 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:48:19.692284 | orchestrator | 2025-11-08 13:48:19.692292 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-11-08 13:48:19.692300 | orchestrator | Saturday 08 November 2025 13:47:41 +0000 (0:00:01.708) 0:01:51.005 ***** 2025-11-08 13:48:19.692308 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:48:19.692316 | orchestrator | 2025-11-08 13:48:19.692323 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-11-08 13:48:19.692331 | orchestrator | 2025-11-08 13:48:19.692339 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-11-08 13:48:19.692346 | orchestrator | Saturday 08 November 2025 13:47:56 +0000 (0:00:14.958) 0:02:05.964 ***** 2025-11-08 13:48:19.692354 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:48:19.692362 | orchestrator | 2025-11-08 13:48:19.692377 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-11-08 13:48:19.692385 | orchestrator | Saturday 08 November 2025 13:47:57 +0000 (0:00:00.644) 0:02:06.608 ***** 2025-11-08 13:48:19.692398 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:48:19.692406 | orchestrator | 2025-11-08 13:48:19.692414 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-11-08 13:48:19.692427 | orchestrator | Saturday 08 November 2025 13:47:57 +0000 (0:00:00.223) 0:02:06.831 ***** 2025-11-08 13:48:19.692435 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:48:19.692443 | orchestrator | 2025-11-08 13:48:19.692450 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-11-08 13:48:19.692458 | orchestrator | Saturday 08 November 2025 13:48:04 +0000 (0:00:06.805) 0:02:13.636 ***** 2025-11-08 13:48:19.692466 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:48:19.692474 | orchestrator | 2025-11-08 13:48:19.692482 | orchestrator | PLAY [Apply rabbitmq post-configuration] 
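[annotation] The three "Restart rabbitmq services" plays above restart the broker on one node at a time and block until it is back (roughly 52 s, 15 s and 12 s here), so the cluster never loses more than one member at once. Approximated below as a single serialised play; the host group, container name and readiness check are assumptions, since the real role waits on its own health check rather than on a TCP port:

  - name: Restart rabbitmq services one node at a time (sketch)
    hosts: rabbitmq
    serial: 1
    tasks:
      - name: Restart rabbitmq container
        ansible.builtin.command:
          cmd: docker restart rabbitmq
        changed_when: true

      - name: Waiting for rabbitmq to start
        ansible.builtin.wait_for:
          host: "{{ ansible_host }}"
          port: 5672                             # AMQP port as a stand-in readiness signal
          timeout: 300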
*************************************** 2025-11-08 13:48:19.692490 | orchestrator | 2025-11-08 13:48:19.692497 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-11-08 13:48:19.692505 | orchestrator | Saturday 08 November 2025 13:48:16 +0000 (0:00:12.014) 0:02:25.651 ***** 2025-11-08 13:48:19.692513 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:48:19.692521 | orchestrator | 2025-11-08 13:48:19.692534 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-11-08 13:48:19.692546 | orchestrator | Saturday 08 November 2025 13:48:16 +0000 (0:00:00.430) 0:02:26.081 ***** 2025-11-08 13:48:19.692561 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-11-08 13:48:19.692580 | orchestrator | enable_outward_rabbitmq_True 2025-11-08 13:48:19.692593 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-11-08 13:48:19.692605 | orchestrator | outward_rabbitmq_restart 2025-11-08 13:48:19.692619 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:48:19.692633 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:48:19.692646 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:48:19.692655 | orchestrator | 2025-11-08 13:48:19.692662 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-11-08 13:48:19.692670 | orchestrator | skipping: no hosts matched 2025-11-08 13:48:19.692678 | orchestrator | 2025-11-08 13:48:19.692686 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-11-08 13:48:19.692694 | orchestrator | skipping: no hosts matched 2025-11-08 13:48:19.692701 | orchestrator | 2025-11-08 13:48:19.692733 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-11-08 13:48:19.692743 | orchestrator | skipping: no hosts matched 2025-11-08 13:48:19.692751 | orchestrator | 2025-11-08 13:48:19.692758 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 13:48:19.692767 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-11-08 13:48:19.692776 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-11-08 13:48:19.692784 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 13:48:19.692792 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 13:48:19.692800 | orchestrator | 2025-11-08 13:48:19.692808 | orchestrator | 2025-11-08 13:48:19.692816 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 13:48:19.692824 | orchestrator | Saturday 08 November 2025 13:48:19 +0000 (0:00:02.481) 0:02:28.563 ***** 2025-11-08 13:48:19.692832 | orchestrator | =============================================================================== 2025-11-08 13:48:19.692839 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 78.98s 2025-11-08 13:48:19.692847 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 15.35s 2025-11-08 13:48:19.692873 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 8.47s 2025-11-08 13:48:19.692881 | 
orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 4.36s 2025-11-08 13:48:19.692889 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.41s 2025-11-08 13:48:19.692896 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 3.19s 2025-11-08 13:48:19.692904 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.82s 2025-11-08 13:48:19.692912 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.63s 2025-11-08 13:48:19.692919 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.48s 2025-11-08 13:48:19.692927 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.15s 2025-11-08 13:48:19.692935 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.94s 2025-11-08 13:48:19.692943 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.90s 2025-11-08 13:48:19.692950 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.79s 2025-11-08 13:48:19.692958 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.74s 2025-11-08 13:48:19.692966 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.58s 2025-11-08 13:48:19.692973 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.52s 2025-11-08 13:48:19.692981 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.42s 2025-11-08 13:48:19.692995 | orchestrator | rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 1.35s 2025-11-08 13:48:19.693003 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.33s 2025-11-08 13:48:19.693011 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.20s 2025-11-08 13:48:19.693024 | orchestrator | 2025-11-08 13:48:19 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:48:22.730333 | orchestrator | 2025-11-08 13:48:22 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:48:22.731339 | orchestrator | 2025-11-08 13:48:22 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:48:22.736285 | orchestrator | 2025-11-08 13:48:22 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:48:22.736802 | orchestrator | 2025-11-08 13:48:22 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:48:25.780572 | orchestrator | 2025-11-08 13:48:25 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:48:25.782398 | orchestrator | 2025-11-08 13:48:25 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:48:25.783973 | orchestrator | 2025-11-08 13:48:25 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:48:25.784179 | orchestrator | 2025-11-08 13:48:25 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:48:28.824694 | orchestrator | 2025-11-08 13:48:28 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:48:28.826593 | orchestrator | 2025-11-08 13:48:28 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:48:28.827536 | orchestrator | 
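[annotation] The rabbitmq post-configuration play summarised above ("Enable all stable feature flags", 2.48 s) switches on RabbitMQ's stable feature flags once the brokers are up, which later upgrades and newer queue types depend on. How the role issues the command is not visible in this log; as a sketch, one broker is enough because feature-flag state is cluster-wide (the container name matches the log, the docker exec wrapper is an assumption):

  - name: Enable all stable feature flags (sketch)
    ansible.builtin.command:
      cmd: docker exec rabbitmq rabbitmqctl enable_feature_flag all
    run_once: true
    changed_when: true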
2025-11-08 13:48:28 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:48:28.827644 | orchestrator | 2025-11-08 13:48:28 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:48:31.863590 | orchestrator | 2025-11-08 13:48:31 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:48:31.864087 | orchestrator | 2025-11-08 13:48:31 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:48:31.865082 | orchestrator | 2025-11-08 13:48:31 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:48:31.865112 | orchestrator | 2025-11-08 13:48:31 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:48:34.895403 | orchestrator | 2025-11-08 13:48:34 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:48:34.897066 | orchestrator | 2025-11-08 13:48:34 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:48:34.899139 | orchestrator | 2025-11-08 13:48:34 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:48:34.899364 | orchestrator | 2025-11-08 13:48:34 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:48:37.936596 | orchestrator | 2025-11-08 13:48:37 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:48:37.938393 | orchestrator | 2025-11-08 13:48:37 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:48:37.940489 | orchestrator | 2025-11-08 13:48:37 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:48:37.940554 | orchestrator | 2025-11-08 13:48:37 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:48:40.974398 | orchestrator | 2025-11-08 13:48:40 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:48:40.974554 | orchestrator | 2025-11-08 13:48:40 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:48:40.976616 | orchestrator | 2025-11-08 13:48:40 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:48:40.976759 | orchestrator | 2025-11-08 13:48:40 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:48:44.018846 | orchestrator | 2025-11-08 13:48:44 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:48:44.018985 | orchestrator | 2025-11-08 13:48:44 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:48:44.019893 | orchestrator | 2025-11-08 13:48:44 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:48:44.019987 | orchestrator | 2025-11-08 13:48:44 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:48:47.062589 | orchestrator | 2025-11-08 13:48:47 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:48:47.064393 | orchestrator | 2025-11-08 13:48:47 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:48:47.066920 | orchestrator | 2025-11-08 13:48:47 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:48:47.066962 | orchestrator | 2025-11-08 13:48:47 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:48:50.123815 | orchestrator | 2025-11-08 13:48:50 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:48:50.125882 | orchestrator | 2025-11-08 13:48:50 | INFO  | Task 
61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:48:50.127259 | orchestrator | 2025-11-08 13:48:50 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:48:50.127308 | orchestrator | 2025-11-08 13:48:50 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:48:53.167537 | orchestrator | 2025-11-08 13:48:53 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:48:53.170510 | orchestrator | 2025-11-08 13:48:53 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:48:53.173447 | orchestrator | 2025-11-08 13:48:53 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:48:53.173525 | orchestrator | 2025-11-08 13:48:53 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:48:56.221852 | orchestrator | 2025-11-08 13:48:56 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:48:56.221950 | orchestrator | 2025-11-08 13:48:56 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:48:56.223999 | orchestrator | 2025-11-08 13:48:56 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:48:56.224052 | orchestrator | 2025-11-08 13:48:56 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:48:59.257504 | orchestrator | 2025-11-08 13:48:59 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:48:59.258004 | orchestrator | 2025-11-08 13:48:59 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:48:59.258945 | orchestrator | 2025-11-08 13:48:59 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:48:59.259027 | orchestrator | 2025-11-08 13:48:59 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:49:02.305257 | orchestrator | 2025-11-08 13:49:02 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:49:02.306385 | orchestrator | 2025-11-08 13:49:02 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:49:02.307211 | orchestrator | 2025-11-08 13:49:02 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:49:02.307247 | orchestrator | 2025-11-08 13:49:02 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:49:05.348625 | orchestrator | 2025-11-08 13:49:05 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:49:05.349230 | orchestrator | 2025-11-08 13:49:05 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:49:05.350322 | orchestrator | 2025-11-08 13:49:05 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:49:05.350346 | orchestrator | 2025-11-08 13:49:05 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:49:08.391153 | orchestrator | 2025-11-08 13:49:08 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state STARTED 2025-11-08 13:49:08.392655 | orchestrator | 2025-11-08 13:49:08 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:49:08.394187 | orchestrator | 2025-11-08 13:49:08 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:49:08.394326 | orchestrator | 2025-11-08 13:49:08 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:49:11.432473 | orchestrator | 2025-11-08 13:49:11 | INFO  | Task f5d8365b-5b05-40df-8872-bc5b8623ac59 is in state 
SUCCESS 2025-11-08 13:49:11.433440 | orchestrator | 2025-11-08 13:49:11.433475 | orchestrator | 2025-11-08 13:49:11.433488 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-08 13:49:11.433587 | orchestrator | 2025-11-08 13:49:11.433669 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-08 13:49:11.434002 | orchestrator | Saturday 08 November 2025 13:46:33 +0000 (0:00:00.170) 0:00:00.170 ***** 2025-11-08 13:49:11.434061 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:49:11.434077 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:49:11.434088 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:49:11.434099 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:49:11.434110 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:49:11.434121 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:49:11.434132 | orchestrator | 2025-11-08 13:49:11.434165 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-08 13:49:11.434177 | orchestrator | Saturday 08 November 2025 13:46:34 +0000 (0:00:00.774) 0:00:00.944 ***** 2025-11-08 13:49:11.434188 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-11-08 13:49:11.434199 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-11-08 13:49:11.434221 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-11-08 13:49:11.434232 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-11-08 13:49:11.434243 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-11-08 13:49:11.434254 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-11-08 13:49:11.434265 | orchestrator | 2025-11-08 13:49:11.434276 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-11-08 13:49:11.434287 | orchestrator | 2025-11-08 13:49:11.434298 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-11-08 13:49:11.434309 | orchestrator | Saturday 08 November 2025 13:46:35 +0000 (0:00:00.872) 0:00:01.817 ***** 2025-11-08 13:49:11.434321 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:49:11.434333 | orchestrator | 2025-11-08 13:49:11.434344 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-11-08 13:49:11.434355 | orchestrator | Saturday 08 November 2025 13:46:36 +0000 (0:00:01.360) 0:00:03.177 ***** 2025-11-08 13:49:11.434368 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.434382 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.434393 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.434405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.434416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.434427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.434447 | orchestrator | 2025-11-08 13:49:11.434472 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-11-08 13:49:11.434484 | orchestrator | Saturday 08 November 2025 13:46:38 +0000 (0:00:01.913) 0:00:05.090 ***** 2025-11-08 13:49:11.434495 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.434507 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.434518 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2025-11-08 13:49:11.434529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.434540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.434617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.434637 | orchestrator | 2025-11-08 13:49:11.434650 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-11-08 13:49:11.434663 | orchestrator | Saturday 08 November 2025 13:46:40 +0000 (0:00:01.820) 0:00:06.910 ***** 2025-11-08 13:49:11.434676 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.434689 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.434763 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.434778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.434795 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.434808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.434820 | orchestrator | 2025-11-08 13:49:11.434833 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-11-08 13:49:11.434845 | orchestrator | Saturday 08 November 2025 13:46:42 +0000 (0:00:01.546) 0:00:08.457 ***** 2025-11-08 13:49:11.434857 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.434870 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.434883 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.434896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.434924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.434946 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.434966 | orchestrator | 2025-11-08 13:49:11.434993 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-11-08 13:49:11.435005 | orchestrator | Saturday 08 November 2025 13:46:44 +0000 (0:00:02.236) 0:00:10.694 ***** 2025-11-08 13:49:11.435016 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.435032 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.435044 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.435055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.435066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.435077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.435088 | orchestrator | 2025-11-08 13:49:11.435099 | orchestrator | TASK [ovn-controller : Create br-int bridge on 
OpenvSwitch] ******************** 2025-11-08 13:49:11.435117 | orchestrator | Saturday 08 November 2025 13:46:46 +0000 (0:00:02.006) 0:00:12.701 ***** 2025-11-08 13:49:11.435128 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:49:11.435140 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:49:11.435150 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:49:11.435161 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:49:11.435172 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:49:11.435182 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:49:11.435193 | orchestrator | 2025-11-08 13:49:11.435204 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-11-08 13:49:11.435215 | orchestrator | Saturday 08 November 2025 13:46:49 +0000 (0:00:02.947) 0:00:15.648 ***** 2025-11-08 13:49:11.435225 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-11-08 13:49:11.435236 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-11-08 13:49:11.435247 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-11-08 13:49:11.435258 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-11-08 13:49:11.435268 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-11-08 13:49:11.435279 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-11-08 13:49:11.435289 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-11-08 13:49:11.435300 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-11-08 13:49:11.435317 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-11-08 13:49:11.435328 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-11-08 13:49:11.435338 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-11-08 13:49:11.435349 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-11-08 13:49:11.435360 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-11-08 13:49:11.435372 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-11-08 13:49:11.435388 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-11-08 13:49:11.435400 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-11-08 13:49:11.435411 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-11-08 13:49:11.435422 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-11-08 13:49:11.435432 | orchestrator | 
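Note on the 'Create br-int bridge on OpenvSwitch' task above: it creates the OVN integration bridge on every node. Purely as an illustrative sketch (not the role's exact implementation), the same result can be produced or verified with the Open vSwitch CLI; on this kolla-based testbed the commands would typically be run inside the openvswitch_vswitchd container (container name assumed):

  # Create the integration bridge if it does not exist yet (idempotent).
  ovs-vsctl --may-exist add-br br-int
  # Verify that the bridge is present (exit code 0 means it exists).
  ovs-vsctl br-exists br-int && echo "br-int present"
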
changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-11-08 13:49:11.435444 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-11-08 13:49:11.435455 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-11-08 13:49:11.435465 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-11-08 13:49:11.435476 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-11-08 13:49:11.435487 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-11-08 13:49:11.435505 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-11-08 13:49:11.435515 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-11-08 13:49:11.435526 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-11-08 13:49:11.435537 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-11-08 13:49:11.435547 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-11-08 13:49:11.435558 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-11-08 13:49:11.435569 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-11-08 13:49:11.435580 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-11-08 13:49:11.435591 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-11-08 13:49:11.435601 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-11-08 13:49:11.435612 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-11-08 13:49:11.435622 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-11-08 13:49:11.435633 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-11-08 13:49:11.435644 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-11-08 13:49:11.435655 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-11-08 13:49:11.435666 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-11-08 13:49:11.435677 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-11-08 13:49:11.435688 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-11-08 13:49:11.435728 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-11-08 13:49:11.435747 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 
'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-11-08 13:49:11.435775 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-11-08 13:49:11.435792 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-11-08 13:49:11.435804 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-11-08 13:49:11.435815 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-11-08 13:49:11.435826 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-11-08 13:49:11.435836 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-11-08 13:49:11.435852 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-11-08 13:49:11.435864 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-11-08 13:49:11.435874 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-11-08 13:49:11.435892 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-11-08 13:49:11.435903 | orchestrator | 2025-11-08 13:49:11.435914 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-11-08 13:49:11.435925 | orchestrator | Saturday 08 November 2025 13:47:09 +0000 (0:00:20.535) 0:00:36.184 ***** 2025-11-08 13:49:11.435936 | orchestrator | 2025-11-08 13:49:11.435947 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-11-08 13:49:11.435957 | orchestrator | Saturday 08 November 2025 13:47:10 +0000 (0:00:00.077) 0:00:36.261 ***** 2025-11-08 13:49:11.435968 | orchestrator | 2025-11-08 13:49:11.435978 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-11-08 13:49:11.435989 | orchestrator | Saturday 08 November 2025 13:47:10 +0000 (0:00:00.081) 0:00:36.343 ***** 2025-11-08 13:49:11.435999 | orchestrator | 2025-11-08 13:49:11.436010 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-11-08 13:49:11.436020 | orchestrator | Saturday 08 November 2025 13:47:10 +0000 (0:00:00.150) 0:00:36.498 ***** 2025-11-08 13:49:11.436031 | orchestrator | 2025-11-08 13:49:11.436042 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-11-08 13:49:11.436052 | orchestrator | Saturday 08 November 2025 13:47:10 +0000 (0:00:00.167) 0:00:36.666 ***** 2025-11-08 13:49:11.436063 | orchestrator | 2025-11-08 13:49:11.436073 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-11-08 13:49:11.436084 | orchestrator | Saturday 08 November 2025 13:47:10 +0000 (0:00:00.239) 0:00:36.905 ***** 2025-11-08 13:49:11.436095 | orchestrator | 2025-11-08 13:49:11.436105 | orchestrator | RUNNING HANDLER [ovn-controller : Reload 
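Note on the 'Configure OVN in OVSDB' task above: it stores the per-chassis OVN settings shown in the items (encapsulation IP and type, southbound remotes, probe intervals, bridge/MAC mappings, CMS options) as external_ids in the local Open vSwitch database; nodes 0-2 additionally get the bridge mapping and enable-chassis-as-gw CMS options, while nodes 3-5 get chassis MAC mappings instead. An illustrative manual equivalent for testbed-node-0, using only values visible in the log:

  # Chassis-level OVN settings in the Open_vSwitch table (values for testbed-node-0).
  ovs-vsctl set open . \
      external_ids:ovn-encap-ip=192.168.16.10 \
      external_ids:ovn-encap-type=geneve \
      external_ids:ovn-remote='"tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642"' \
      external_ids:ovn-remote-probe-interval=60000 \
      external_ids:ovn-openflow-probe-interval=60 \
      external_ids:ovn-monitor-all=false \
      external_ids:ovn-bridge-mappings=physnet1:br-ex \
      external_ids:ovn-cms-options='"enable-chassis-as-gw,availability-zones=nova"'
  # Inspect the resulting configuration.
  ovs-vsctl get open . external_ids
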
systemd config] *********************** 2025-11-08 13:49:11.436116 | orchestrator | Saturday 08 November 2025 13:47:10 +0000 (0:00:00.158) 0:00:37.064 ***** 2025-11-08 13:49:11.436127 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:49:11.436138 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:49:11.436148 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:49:11.436159 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:49:11.436170 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:49:11.436180 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:49:11.436191 | orchestrator | 2025-11-08 13:49:11.436201 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-11-08 13:49:11.436212 | orchestrator | Saturday 08 November 2025 13:47:12 +0000 (0:00:02.149) 0:00:39.214 ***** 2025-11-08 13:49:11.436223 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:49:11.436234 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:49:11.436244 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:49:11.436255 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:49:11.436265 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:49:11.436276 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:49:11.436287 | orchestrator | 2025-11-08 13:49:11.436297 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-11-08 13:49:11.436308 | orchestrator | 2025-11-08 13:49:11.436319 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-11-08 13:49:11.436330 | orchestrator | Saturday 08 November 2025 13:47:47 +0000 (0:00:34.496) 0:01:13.710 ***** 2025-11-08 13:49:11.436340 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:49:11.436351 | orchestrator | 2025-11-08 13:49:11.436362 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-11-08 13:49:11.436373 | orchestrator | Saturday 08 November 2025 13:47:48 +0000 (0:00:00.788) 0:01:14.499 ***** 2025-11-08 13:49:11.436383 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:49:11.436395 | orchestrator | 2025-11-08 13:49:11.436405 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-11-08 13:49:11.436422 | orchestrator | Saturday 08 November 2025 13:47:49 +0000 (0:00:00.758) 0:01:15.257 ***** 2025-11-08 13:49:11.436433 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:49:11.436443 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:49:11.436454 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:49:11.436465 | orchestrator | 2025-11-08 13:49:11.436476 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-11-08 13:49:11.436486 | orchestrator | Saturday 08 November 2025 13:47:50 +0000 (0:00:00.997) 0:01:16.255 ***** 2025-11-08 13:49:11.436497 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:49:11.436508 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:49:11.436518 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:49:11.436534 | orchestrator | 2025-11-08 13:49:11.436545 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-11-08 13:49:11.436556 | orchestrator | Saturday 08 November 2025 13:47:50 +0000 (0:00:00.437) 0:01:16.692 ***** 2025-11-08 
13:49:11.436567 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:49:11.436577 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:49:11.436588 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:49:11.436598 | orchestrator | 2025-11-08 13:49:11.436609 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-11-08 13:49:11.436620 | orchestrator | Saturday 08 November 2025 13:47:50 +0000 (0:00:00.418) 0:01:17.111 ***** 2025-11-08 13:49:11.436630 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:49:11.436828 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:49:11.436841 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:49:11.436852 | orchestrator | 2025-11-08 13:49:11.436863 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-11-08 13:49:11.436874 | orchestrator | Saturday 08 November 2025 13:47:51 +0000 (0:00:00.417) 0:01:17.529 ***** 2025-11-08 13:49:11.436885 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:49:11.436902 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:49:11.436913 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:49:11.436924 | orchestrator | 2025-11-08 13:49:11.436934 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-11-08 13:49:11.436946 | orchestrator | Saturday 08 November 2025 13:47:52 +0000 (0:00:00.732) 0:01:18.262 ***** 2025-11-08 13:49:11.436956 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:49:11.436967 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:49:11.436978 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:49:11.436988 | orchestrator | 2025-11-08 13:49:11.436999 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-11-08 13:49:11.437010 | orchestrator | Saturday 08 November 2025 13:47:52 +0000 (0:00:00.368) 0:01:18.631 ***** 2025-11-08 13:49:11.437021 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:49:11.437031 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:49:11.437042 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:49:11.437053 | orchestrator | 2025-11-08 13:49:11.437064 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-11-08 13:49:11.437074 | orchestrator | Saturday 08 November 2025 13:47:52 +0000 (0:00:00.316) 0:01:18.947 ***** 2025-11-08 13:49:11.437085 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:49:11.437096 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:49:11.437107 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:49:11.437118 | orchestrator | 2025-11-08 13:49:11.437128 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-11-08 13:49:11.437139 | orchestrator | Saturday 08 November 2025 13:47:53 +0000 (0:00:00.299) 0:01:19.246 ***** 2025-11-08 13:49:11.437150 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:49:11.437160 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:49:11.437171 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:49:11.437182 | orchestrator | 2025-11-08 13:49:11.437193 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-11-08 13:49:11.437203 | orchestrator | Saturday 08 November 2025 13:47:53 +0000 (0:00:00.559) 0:01:19.806 ***** 2025-11-08 13:49:11.437224 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:49:11.437233 | orchestrator | skipping: 
[testbed-node-1] 2025-11-08 13:49:11.437243 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:49:11.437252 | orchestrator | 2025-11-08 13:49:11.437262 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-11-08 13:49:11.437272 | orchestrator | Saturday 08 November 2025 13:47:53 +0000 (0:00:00.332) 0:01:20.139 ***** 2025-11-08 13:49:11.437281 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:49:11.437291 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:49:11.437300 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:49:11.437310 | orchestrator | 2025-11-08 13:49:11.437319 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-11-08 13:49:11.437329 | orchestrator | Saturday 08 November 2025 13:47:54 +0000 (0:00:00.370) 0:01:20.509 ***** 2025-11-08 13:49:11.437338 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:49:11.437348 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:49:11.437357 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:49:11.437367 | orchestrator | 2025-11-08 13:49:11.437376 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-11-08 13:49:11.437386 | orchestrator | Saturday 08 November 2025 13:47:54 +0000 (0:00:00.303) 0:01:20.813 ***** 2025-11-08 13:49:11.437395 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:49:11.437405 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:49:11.437414 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:49:11.437424 | orchestrator | 2025-11-08 13:49:11.437433 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-11-08 13:49:11.437443 | orchestrator | Saturday 08 November 2025 13:47:54 +0000 (0:00:00.301) 0:01:21.114 ***** 2025-11-08 13:49:11.437453 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:49:11.437462 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:49:11.437472 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:49:11.437481 | orchestrator | 2025-11-08 13:49:11.437491 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-11-08 13:49:11.437500 | orchestrator | Saturday 08 November 2025 13:47:55 +0000 (0:00:00.643) 0:01:21.757 ***** 2025-11-08 13:49:11.437510 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:49:11.437519 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:49:11.437529 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:49:11.437538 | orchestrator | 2025-11-08 13:49:11.437548 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-11-08 13:49:11.437557 | orchestrator | Saturday 08 November 2025 13:47:55 +0000 (0:00:00.435) 0:01:22.193 ***** 2025-11-08 13:49:11.437567 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:49:11.437576 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:49:11.437585 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:49:11.437595 | orchestrator | 2025-11-08 13:49:11.437604 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-11-08 13:49:11.437614 | orchestrator | Saturday 08 November 2025 13:47:56 +0000 (0:00:00.340) 0:01:22.534 ***** 2025-11-08 13:49:11.437624 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:49:11.437633 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:49:11.437650 | orchestrator | 
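Note on the lookup tasks above: they only probe for a pre-existing Raft cluster; on this fresh deployment they are all skipped and the databases are bootstrapped as a new cluster further below. As a hedged sketch (default OVN control socket paths assumed; inside the kolla containers the paths differ), an equivalent manual probe of a running database server looks like:

  # Query the Raft cluster state of the northbound and southbound databases.
  ovs-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound
  ovs-appctl -t /var/run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound
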
skipping: [testbed-node-2] 2025-11-08 13:49:11.437660 | orchestrator | 2025-11-08 13:49:11.437669 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-11-08 13:49:11.437679 | orchestrator | Saturday 08 November 2025 13:47:56 +0000 (0:00:00.312) 0:01:22.846 ***** 2025-11-08 13:49:11.437689 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:49:11.437721 | orchestrator | 2025-11-08 13:49:11.437731 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-11-08 13:49:11.437741 | orchestrator | Saturday 08 November 2025 13:47:57 +0000 (0:00:00.812) 0:01:23.659 ***** 2025-11-08 13:49:11.437751 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:49:11.437767 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:49:11.437777 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:49:11.437786 | orchestrator | 2025-11-08 13:49:11.437796 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-11-08 13:49:11.437806 | orchestrator | Saturday 08 November 2025 13:47:57 +0000 (0:00:00.451) 0:01:24.111 ***** 2025-11-08 13:49:11.437824 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:49:11.437833 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:49:11.437843 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:49:11.437852 | orchestrator | 2025-11-08 13:49:11.437862 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-11-08 13:49:11.437872 | orchestrator | Saturday 08 November 2025 13:47:58 +0000 (0:00:00.453) 0:01:24.564 ***** 2025-11-08 13:49:11.437881 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:49:11.437891 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:49:11.437900 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:49:11.437909 | orchestrator | 2025-11-08 13:49:11.437919 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-11-08 13:49:11.437928 | orchestrator | Saturday 08 November 2025 13:47:58 +0000 (0:00:00.576) 0:01:25.141 ***** 2025-11-08 13:49:11.437938 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:49:11.437947 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:49:11.437957 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:49:11.437966 | orchestrator | 2025-11-08 13:49:11.437976 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-11-08 13:49:11.437985 | orchestrator | Saturday 08 November 2025 13:47:59 +0000 (0:00:00.457) 0:01:25.598 ***** 2025-11-08 13:49:11.438059 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:49:11.438072 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:49:11.438081 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:49:11.438091 | orchestrator | 2025-11-08 13:49:11.438100 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-11-08 13:49:11.438111 | orchestrator | Saturday 08 November 2025 13:47:59 +0000 (0:00:00.460) 0:01:26.059 ***** 2025-11-08 13:49:11.438120 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:49:11.438130 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:49:11.438139 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:49:11.438149 | orchestrator | 2025-11-08 13:49:11.438159 | orchestrator | TASK [ovn-db : Set 
bootstrap args fact for NB (new member)] ******************** 2025-11-08 13:49:11.438168 | orchestrator | Saturday 08 November 2025 13:48:00 +0000 (0:00:00.410) 0:01:26.469 ***** 2025-11-08 13:49:11.438178 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:49:11.438188 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:49:11.438197 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:49:11.438207 | orchestrator | 2025-11-08 13:49:11.438216 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-11-08 13:49:11.438226 | orchestrator | Saturday 08 November 2025 13:48:00 +0000 (0:00:00.550) 0:01:27.020 ***** 2025-11-08 13:49:11.438236 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:49:11.438341 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:49:11.438352 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:49:11.438361 | orchestrator | 2025-11-08 13:49:11.438371 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-11-08 13:49:11.438380 | orchestrator | Saturday 08 November 2025 13:48:01 +0000 (0:00:00.358) 0:01:27.378 ***** 2025-11-08 13:49:11.438391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.438403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.438421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.438439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.438451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.438466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.438477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.438487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.438497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.438507 | orchestrator | 2025-11-08 13:49:11.438517 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-11-08 13:49:11.438527 | orchestrator | Saturday 08 November 2025 13:48:02 +0000 (0:00:01.543) 0:01:28.922 ***** 2025-11-08 13:49:11.438537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.438547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.438563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.438573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.438588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 
'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.438598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.438612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.438622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.438632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.438642 | orchestrator | 2025-11-08 13:49:11.438652 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-11-08 13:49:11.438662 | orchestrator | Saturday 08 November 2025 13:48:07 +0000 (0:00:04.522) 0:01:33.444 ***** 2025-11-08 13:49:11.438672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.438682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.438726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
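Note: once the config directories and config.json files for ovn-northd, ovn-nb-db and ovn-sb-db are in place, the 'Check ovn containers' task (re)creates the containers with the images and volumes listed in the items. A quick, illustrative way to verify the result on one of the controller nodes (assuming the Docker CLI used by this deployment):

  # List the OVN containers started by kolla-ansible and their status.
  docker ps --filter name=ovn_ --format '{{.Names}}\t{{.Status}}'
  # The database files are kept in named volumes (ovn_nb_db, ovn_sb_db per the log).
  docker volume ls --filter name=ovn_
  # Confirm the mounts of the northbound DB container against the volume list above.
  docker inspect ovn_nb_db --format '{{json .Mounts}}'
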
2025-11-08 13:49:11.438744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.438754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.438770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.438781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.438858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.438872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.438882 | orchestrator | 2025-11-08 13:49:11.438891 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-11-08 13:49:11.438901 | orchestrator | Saturday 08 November 2025 13:48:09 +0000 (0:00:02.333) 0:01:35.778 ***** 2025-11-08 13:49:11.438911 | orchestrator | 2025-11-08 13:49:11.438921 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-11-08 13:49:11.438930 | orchestrator | Saturday 08 November 2025 13:48:09 +0000 (0:00:00.064) 0:01:35.843 ***** 2025-11-08 13:49:11.438940 | orchestrator | 2025-11-08 13:49:11.438949 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-11-08 13:49:11.438959 | orchestrator | Saturday 08 November 2025 13:48:09 +0000 (0:00:00.058) 0:01:35.901 ***** 2025-11-08 13:49:11.438968 | orchestrator | 2025-11-08 
13:49:11.438978 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-11-08 13:49:11.438987 | orchestrator | Saturday 08 November 2025 13:48:09 +0000 (0:00:00.065) 0:01:35.967 ***** 2025-11-08 13:49:11.439004 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:49:11.439013 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:49:11.439023 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:49:11.439032 | orchestrator | 2025-11-08 13:49:11.439042 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-11-08 13:49:11.439051 | orchestrator | Saturday 08 November 2025 13:48:16 +0000 (0:00:06.809) 0:01:42.777 ***** 2025-11-08 13:49:11.439061 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:49:11.439070 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:49:11.439080 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:49:11.439089 | orchestrator | 2025-11-08 13:49:11.439099 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-11-08 13:49:11.439108 | orchestrator | Saturday 08 November 2025 13:48:24 +0000 (0:00:07.509) 0:01:50.286 ***** 2025-11-08 13:49:11.439118 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:49:11.439127 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:49:11.439136 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:49:11.439146 | orchestrator | 2025-11-08 13:49:11.439156 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-11-08 13:49:11.439165 | orchestrator | Saturday 08 November 2025 13:48:31 +0000 (0:00:07.206) 0:01:57.493 ***** 2025-11-08 13:49:11.439174 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:49:11.439184 | orchestrator | 2025-11-08 13:49:11.439194 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-11-08 13:49:11.439203 | orchestrator | Saturday 08 November 2025 13:48:31 +0000 (0:00:00.339) 0:01:57.833 ***** 2025-11-08 13:49:11.439213 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:49:11.439223 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:49:11.439232 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:49:11.439242 | orchestrator | 2025-11-08 13:49:11.439252 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-11-08 13:49:11.439261 | orchestrator | Saturday 08 November 2025 13:48:32 +0000 (0:00:00.821) 0:01:58.655 ***** 2025-11-08 13:49:11.439271 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:49:11.439280 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:49:11.439289 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:49:11.439299 | orchestrator | 2025-11-08 13:49:11.439309 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-11-08 13:49:11.439318 | orchestrator | Saturday 08 November 2025 13:48:33 +0000 (0:00:00.667) 0:01:59.322 ***** 2025-11-08 13:49:11.439327 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:49:11.439337 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:49:11.439346 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:49:11.439356 | orchestrator | 2025-11-08 13:49:11.439365 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-11-08 13:49:11.439375 | orchestrator | Saturday 08 November 2025 13:48:33 +0000 (0:00:00.849) 0:02:00.172 ***** 2025-11-08 
13:49:11.439384 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:49:11.439394 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:49:11.439403 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:49:11.439413 | orchestrator | 2025-11-08 13:49:11.439422 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-11-08 13:49:11.439432 | orchestrator | Saturday 08 November 2025 13:48:34 +0000 (0:00:00.654) 0:02:00.827 ***** 2025-11-08 13:49:11.439441 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:49:11.439451 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:49:11.439467 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:49:11.439477 | orchestrator | 2025-11-08 13:49:11.439486 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-11-08 13:49:11.439498 | orchestrator | Saturday 08 November 2025 13:48:35 +0000 (0:00:01.065) 0:02:01.892 ***** 2025-11-08 13:49:11.439509 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:49:11.439520 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:49:11.439530 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:49:11.439541 | orchestrator | 2025-11-08 13:49:11.439558 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-11-08 13:49:11.439569 | orchestrator | Saturday 08 November 2025 13:48:36 +0000 (0:00:00.816) 0:02:02.708 ***** 2025-11-08 13:49:11.439580 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:49:11.439591 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:49:11.439602 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:49:11.439611 | orchestrator | 2025-11-08 13:49:11.439621 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-11-08 13:49:11.439630 | orchestrator | Saturday 08 November 2025 13:48:36 +0000 (0:00:00.319) 0:02:03.027 ***** 2025-11-08 13:49:11.439644 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.439655 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.439665 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.439675 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 
13:49:11.439685 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.439719 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.439731 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.439741 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.439757 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.439773 | orchestrator | 2025-11-08 13:49:11.439783 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-11-08 13:49:11.439793 | orchestrator | Saturday 08 November 2025 13:48:38 +0000 (0:00:01.446) 0:02:04.474 ***** 2025-11-08 13:49:11.439803 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.439817 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.439827 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.439837 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.439848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.439858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.439868 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.439878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.439888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.439903 | orchestrator | 2025-11-08 13:49:11.439913 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-11-08 13:49:11.439922 | orchestrator | Saturday 08 November 2025 13:48:42 +0000 (0:00:04.363) 0:02:08.837 ***** 2025-11-08 13:49:11.439937 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.439947 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.439961 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.439972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.439982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.439992 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.440002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.440011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.440021 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 13:49:11.440036 | orchestrator | 2025-11-08 13:49:11.440046 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-11-08 13:49:11.440056 | orchestrator | Saturday 08 November 2025 13:48:45 +0000 (0:00:02.984) 0:02:11.822 ***** 2025-11-08 
13:49:11.440065 | orchestrator | 2025-11-08 13:49:11.440075 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-11-08 13:49:11.440084 | orchestrator | Saturday 08 November 2025 13:48:45 +0000 (0:00:00.078) 0:02:11.900 ***** 2025-11-08 13:49:11.440094 | orchestrator | 2025-11-08 13:49:11.440103 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-11-08 13:49:11.440113 | orchestrator | Saturday 08 November 2025 13:48:45 +0000 (0:00:00.067) 0:02:11.968 ***** 2025-11-08 13:49:11.440123 | orchestrator | 2025-11-08 13:49:11.440132 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-11-08 13:49:11.440142 | orchestrator | Saturday 08 November 2025 13:48:45 +0000 (0:00:00.064) 0:02:12.033 ***** 2025-11-08 13:49:11.440151 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:49:11.440161 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:49:11.440170 | orchestrator | 2025-11-08 13:49:11.440185 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-11-08 13:49:11.440195 | orchestrator | Saturday 08 November 2025 13:48:52 +0000 (0:00:06.280) 0:02:18.313 ***** 2025-11-08 13:49:11.440204 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:49:11.440214 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:49:11.440223 | orchestrator | 2025-11-08 13:49:11.440233 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-11-08 13:49:11.440242 | orchestrator | Saturday 08 November 2025 13:48:58 +0000 (0:00:06.234) 0:02:24.548 ***** 2025-11-08 13:49:11.440252 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:49:11.440261 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:49:11.440271 | orchestrator | 2025-11-08 13:49:11.440280 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-11-08 13:49:11.440290 | orchestrator | Saturday 08 November 2025 13:49:05 +0000 (0:00:06.697) 0:02:31.245 ***** 2025-11-08 13:49:11.440299 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:49:11.440309 | orchestrator | 2025-11-08 13:49:11.440319 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-11-08 13:49:11.440335 | orchestrator | Saturday 08 November 2025 13:49:05 +0000 (0:00:00.168) 0:02:31.414 ***** 2025-11-08 13:49:11.440345 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:49:11.440355 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:49:11.440364 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:49:11.440374 | orchestrator | 2025-11-08 13:49:11.440383 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-11-08 13:49:11.440393 | orchestrator | Saturday 08 November 2025 13:49:05 +0000 (0:00:00.788) 0:02:32.202 ***** 2025-11-08 13:49:11.440402 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:49:11.440412 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:49:11.440421 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:49:11.440431 | orchestrator | 2025-11-08 13:49:11.440441 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-11-08 13:49:11.440450 | orchestrator | Saturday 08 November 2025 13:49:06 +0000 (0:00:00.639) 0:02:32.841 ***** 2025-11-08 13:49:11.440460 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:49:11.440469 | 
orchestrator | ok: [testbed-node-1] 2025-11-08 13:49:11.440479 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:49:11.440488 | orchestrator | 2025-11-08 13:49:11.440498 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-11-08 13:49:11.440507 | orchestrator | Saturday 08 November 2025 13:49:07 +0000 (0:00:00.859) 0:02:33.701 ***** 2025-11-08 13:49:11.440517 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:49:11.440526 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:49:11.440536 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:49:11.440551 | orchestrator | 2025-11-08 13:49:11.440560 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-11-08 13:49:11.440570 | orchestrator | Saturday 08 November 2025 13:49:08 +0000 (0:00:00.642) 0:02:34.343 ***** 2025-11-08 13:49:11.440580 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:49:11.440589 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:49:11.440599 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:49:11.440608 | orchestrator | 2025-11-08 13:49:11.440618 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-11-08 13:49:11.440627 | orchestrator | Saturday 08 November 2025 13:49:08 +0000 (0:00:00.750) 0:02:35.094 ***** 2025-11-08 13:49:11.440637 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:49:11.440646 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:49:11.440656 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:49:11.440665 | orchestrator | 2025-11-08 13:49:11.440675 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 13:49:11.440685 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-11-08 13:49:11.440726 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-11-08 13:49:11.440738 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-11-08 13:49:11.440747 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 13:49:11.440757 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 13:49:11.440767 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 13:49:11.440777 | orchestrator | 2025-11-08 13:49:11.440786 | orchestrator | 2025-11-08 13:49:11.440796 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 13:49:11.440805 | orchestrator | Saturday 08 November 2025 13:49:09 +0000 (0:00:00.871) 0:02:35.965 ***** 2025-11-08 13:49:11.440815 | orchestrator | =============================================================================== 2025-11-08 13:49:11.440824 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 34.50s 2025-11-08 13:49:11.440834 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.54s 2025-11-08 13:49:11.440843 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.90s 2025-11-08 13:49:11.440853 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.74s 2025-11-08 13:49:11.440862 | orchestrator | ovn-db : Restart ovn-nb-db 
container ----------------------------------- 13.09s 2025-11-08 13:49:11.440872 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.52s 2025-11-08 13:49:11.440881 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.36s 2025-11-08 13:49:11.440896 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.98s 2025-11-08 13:49:11.440906 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.95s 2025-11-08 13:49:11.440916 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.33s 2025-11-08 13:49:11.440925 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.24s 2025-11-08 13:49:11.440935 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.15s 2025-11-08 13:49:11.440945 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 2.01s 2025-11-08 13:49:11.440954 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.91s 2025-11-08 13:49:11.440964 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.82s 2025-11-08 13:49:11.440980 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.55s 2025-11-08 13:49:11.440993 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.54s 2025-11-08 13:49:11.441003 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.45s 2025-11-08 13:49:11.441013 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.36s 2025-11-08 13:49:11.441022 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.07s 2025-11-08 13:49:11.441032 | orchestrator | 2025-11-08 13:49:11 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:49:11.441041 | orchestrator | 2025-11-08 13:49:11 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:49:11.441051 | orchestrator | 2025-11-08 13:49:11 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:49:14.482605 | orchestrator | 2025-11-08 13:49:14 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:49:14.482788 | orchestrator | 2025-11-08 13:49:14 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:49:14.483390 | orchestrator | 2025-11-08 13:49:14 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:49:17.522371 | orchestrator | 2025-11-08 13:49:17 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:49:17.522500 | orchestrator | 2025-11-08 13:49:17 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:49:17.522515 | orchestrator | 2025-11-08 13:49:17 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:49:20.561508 | orchestrator | 2025-11-08 13:49:20 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:49:20.562895 | orchestrator | 2025-11-08 13:49:20 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:49:20.562936 | orchestrator | 2025-11-08 13:49:20 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:49:23.602382 | orchestrator | 2025-11-08 13:49:23 | INFO  | Task 
61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:49:23.604140 | orchestrator | 2025-11-08 13:49:23 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:49:23.604181 | orchestrator | 2025-11-08 13:49:23 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:49:26.651023 | orchestrator | 2025-11-08 13:49:26 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:49:26.652020 | orchestrator | 2025-11-08 13:49:26 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:49:26.652062 | orchestrator | 2025-11-08 13:49:26 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:49:29.700057 | orchestrator | 2025-11-08 13:49:29 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:49:29.703653 | orchestrator | 2025-11-08 13:49:29 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:49:29.705409 | orchestrator | 2025-11-08 13:49:29 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:49:32.742558 | orchestrator | 2025-11-08 13:49:32 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:49:32.742999 | orchestrator | 2025-11-08 13:49:32 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:49:32.743029 | orchestrator | 2025-11-08 13:49:32 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:49:35.787447 | orchestrator | 2025-11-08 13:49:35 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:49:35.789177 | orchestrator | 2025-11-08 13:49:35 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:49:35.789222 | orchestrator | 2025-11-08 13:49:35 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:49:38.828405 | orchestrator | 2025-11-08 13:49:38 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:49:38.831003 | orchestrator | 2025-11-08 13:49:38 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:49:38.831057 | orchestrator | 2025-11-08 13:49:38 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:49:41.865228 | orchestrator | 2025-11-08 13:49:41 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:49:41.865491 | orchestrator | 2025-11-08 13:49:41 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:49:41.866636 | orchestrator | 2025-11-08 13:49:41 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:49:44.913019 | orchestrator | 2025-11-08 13:49:44 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:49:44.913510 | orchestrator | 2025-11-08 13:49:44 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:49:44.913542 | orchestrator | 2025-11-08 13:49:44 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:49:47.966375 | orchestrator | 2025-11-08 13:49:47 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:49:47.966656 | orchestrator | 2025-11-08 13:49:47 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:49:47.966732 | orchestrator | 2025-11-08 13:49:47 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:49:51.015255 | orchestrator | 2025-11-08 13:49:51 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:49:51.019961 | orchestrator 
| 2025-11-08 13:49:51 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:49:51.020024 | orchestrator | 2025-11-08 13:49:51 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:49:54.067107 | orchestrator | 2025-11-08 13:49:54 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:49:54.068350 | orchestrator | 2025-11-08 13:49:54 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:49:54.068494 | orchestrator | 2025-11-08 13:49:54 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:49:57.114278 | orchestrator | 2025-11-08 13:49:57 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:49:57.115163 | orchestrator | 2025-11-08 13:49:57 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:49:57.115201 | orchestrator | 2025-11-08 13:49:57 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:50:00.153890 | orchestrator | 2025-11-08 13:50:00 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:50:00.153981 | orchestrator | 2025-11-08 13:50:00 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:50:00.153993 | orchestrator | 2025-11-08 13:50:00 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:50:03.205614 | orchestrator | 2025-11-08 13:50:03 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:50:03.206940 | orchestrator | 2025-11-08 13:50:03 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:50:03.206996 | orchestrator | 2025-11-08 13:50:03 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:50:06.267510 | orchestrator | 2025-11-08 13:50:06 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:50:06.272258 | orchestrator | 2025-11-08 13:50:06 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:50:06.272336 | orchestrator | 2025-11-08 13:50:06 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:50:09.323373 | orchestrator | 2025-11-08 13:50:09 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:50:09.324810 | orchestrator | 2025-11-08 13:50:09 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:50:09.324840 | orchestrator | 2025-11-08 13:50:09 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:50:12.375733 | orchestrator | 2025-11-08 13:50:12 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:50:12.377933 | orchestrator | 2025-11-08 13:50:12 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:50:12.377978 | orchestrator | 2025-11-08 13:50:12 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:50:15.415418 | orchestrator | 2025-11-08 13:50:15 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:50:15.415830 | orchestrator | 2025-11-08 13:50:15 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:50:15.415922 | orchestrator | 2025-11-08 13:50:15 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:50:18.477399 | orchestrator | 2025-11-08 13:50:18 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:50:18.478983 | orchestrator | 2025-11-08 13:50:18 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 
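(Note on the "Wait for ovn-nb-db" / "Wait for ovn-sb-db" tasks in the play above: they simply block until the OVN database services accept TCP connections. A minimal Ansible sketch of an equivalent check is given below; it is not the actual kolla-ansible task, and the address variable and the ports 6641/6642, the usual OVN northbound/southbound defaults, are assumptions.)

- name: Wait for ovn-nb-db (illustrative sketch, assumed port 6641)
  ansible.builtin.wait_for:
    host: "{{ api_interface_address }}"   # assumed variable; e.g. 192.168.16.10, the node-0 API address seen in the haproxy healthcheck below
    port: 6641
    connect_timeout: 1
    timeout: 60

- name: Wait for ovn-sb-db (illustrative sketch, assumed port 6642)
  ansible.builtin.wait_for:
    host: "{{ api_interface_address }}"
    port: 6642
    connect_timeout: 1
    timeout: 60

As the output above shows, the wait tasks ran on all three control nodes, while the "Configure OVN NB/SB connection settings" tasks reported changed only on testbed-node-0 and were skipped on testbed-node-1 and testbed-node-2.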
2025-11-08 13:50:18.479246 | orchestrator | 2025-11-08 13:50:18 | INFO  | Wait 1 second(s) until the next check [... Task 61536788-025b-419b-b284-1bddba8cb877 and Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 remain in state STARTED, re-checked every ~3 seconds from 13:50:21 through 13:51:40 ...] 2025-11-08 13:51:43.781357 | orchestrator | 2025-11-08 13:51:43 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:51:43.782939 | orchestrator
| 2025-11-08 13:51:43 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:51:43.782972 | orchestrator | 2025-11-08 13:51:43 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:51:46.828951 | orchestrator | 2025-11-08 13:51:46 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:51:46.831305 | orchestrator | 2025-11-08 13:51:46 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:51:46.831920 | orchestrator | 2025-11-08 13:51:46 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:51:49.864377 | orchestrator | 2025-11-08 13:51:49 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:51:49.865542 | orchestrator | 2025-11-08 13:51:49 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:51:49.865856 | orchestrator | 2025-11-08 13:51:49 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:51:52.910456 | orchestrator | 2025-11-08 13:51:52 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:51:52.911895 | orchestrator | 2025-11-08 13:51:52 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:51:52.911940 | orchestrator | 2025-11-08 13:51:52 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:51:55.977725 | orchestrator | 2025-11-08 13:51:55 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:51:55.980258 | orchestrator | 2025-11-08 13:51:55 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:51:55.980327 | orchestrator | 2025-11-08 13:51:55 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:51:59.011456 | orchestrator | 2025-11-08 13:51:59 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:51:59.011871 | orchestrator | 2025-11-08 13:51:59 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:51:59.011905 | orchestrator | 2025-11-08 13:51:59 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:52:02.081689 | orchestrator | 2025-11-08 13:52:02 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state STARTED 2025-11-08 13:52:02.085776 | orchestrator | 2025-11-08 13:52:02 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:52:02.085872 | orchestrator | 2025-11-08 13:52:02 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:52:05.138413 | orchestrator | 2025-11-08 13:52:05 | INFO  | Task df1975d7-ec3c-4993-a36a-5c97e891420c is in state STARTED 2025-11-08 13:52:05.148129 | orchestrator | 2025-11-08 13:52:05 | INFO  | Task 61536788-025b-419b-b284-1bddba8cb877 is in state SUCCESS 2025-11-08 13:52:05.149753 | orchestrator | 2025-11-08 13:52:05.149811 | orchestrator | 2025-11-08 13:52:05.149837 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-08 13:52:05.149990 | orchestrator | 2025-11-08 13:52:05.150069 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-08 13:52:05.150085 | orchestrator | Saturday 08 November 2025 13:45:24 +0000 (0:00:00.265) 0:00:00.265 ***** 2025-11-08 13:52:05.150096 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:52:05.150109 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:52:05.150120 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:52:05.150131 | orchestrator | 2025-11-08 13:52:05.150142 | 
orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-08 13:52:05.150153 | orchestrator | Saturday 08 November 2025 13:45:24 +0000 (0:00:00.405) 0:00:00.670 ***** 2025-11-08 13:52:05.150165 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-11-08 13:52:05.150177 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-11-08 13:52:05.150188 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-11-08 13:52:05.150198 | orchestrator | 2025-11-08 13:52:05.150209 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-11-08 13:52:05.150734 | orchestrator | 2025-11-08 13:52:05.150771 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-11-08 13:52:05.150783 | orchestrator | Saturday 08 November 2025 13:45:25 +0000 (0:00:00.687) 0:00:01.357 ***** 2025-11-08 13:52:05.150794 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:52:05.150805 | orchestrator | 2025-11-08 13:52:05.150816 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-11-08 13:52:05.150827 | orchestrator | Saturday 08 November 2025 13:45:26 +0000 (0:00:00.797) 0:00:02.155 ***** 2025-11-08 13:52:05.150838 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:52:05.150849 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:52:05.150860 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:52:05.150870 | orchestrator | 2025-11-08 13:52:05.150984 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-11-08 13:52:05.150998 | orchestrator | Saturday 08 November 2025 13:45:26 +0000 (0:00:00.608) 0:00:02.763 ***** 2025-11-08 13:52:05.151009 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:52:05.151020 | orchestrator | 2025-11-08 13:52:05.151031 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-11-08 13:52:05.151042 | orchestrator | Saturday 08 November 2025 13:45:27 +0000 (0:00:00.640) 0:00:03.404 ***** 2025-11-08 13:52:05.151053 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:52:05.151064 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:52:05.151075 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:52:05.151086 | orchestrator | 2025-11-08 13:52:05.151096 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-11-08 13:52:05.151108 | orchestrator | Saturday 08 November 2025 13:45:28 +0000 (0:00:00.717) 0:00:04.121 ***** 2025-11-08 13:52:05.151119 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-11-08 13:52:05.151130 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-11-08 13:52:05.151141 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-11-08 13:52:05.151152 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-11-08 13:52:05.152931 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-11-08 13:52:05.152946 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-11-08 
13:52:05.152956 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-11-08 13:52:05.152967 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-11-08 13:52:05.152977 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-11-08 13:52:05.152987 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-11-08 13:52:05.152997 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-11-08 13:52:05.153006 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-11-08 13:52:05.153015 | orchestrator | 2025-11-08 13:52:05.153025 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-11-08 13:52:05.153035 | orchestrator | Saturday 08 November 2025 13:45:31 +0000 (0:00:03.080) 0:00:07.202 ***** 2025-11-08 13:52:05.153045 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-11-08 13:52:05.153055 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-11-08 13:52:05.153065 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-11-08 13:52:05.153074 | orchestrator | 2025-11-08 13:52:05.153084 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-11-08 13:52:05.153094 | orchestrator | Saturday 08 November 2025 13:45:32 +0000 (0:00:01.041) 0:00:08.244 ***** 2025-11-08 13:52:05.153103 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-11-08 13:52:05.153113 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-11-08 13:52:05.153123 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-11-08 13:52:05.153132 | orchestrator | 2025-11-08 13:52:05.153142 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-11-08 13:52:05.153151 | orchestrator | Saturday 08 November 2025 13:45:34 +0000 (0:00:01.714) 0:00:09.958 ***** 2025-11-08 13:52:05.153272 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-11-08 13:52:05.153290 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.153328 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-11-08 13:52:05.153344 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.153359 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-11-08 13:52:05.153375 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.153391 | orchestrator | 2025-11-08 13:52:05.153408 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-11-08 13:52:05.153425 | orchestrator | Saturday 08 November 2025 13:45:35 +0000 (0:00:01.021) 0:00:10.980 ***** 2025-11-08 13:52:05.153455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-11-08 13:52:05.153479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-11-08 13:52:05.153532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-11-08 13:52:05.153545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-11-08 13:52:05.153558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-11-08 13:52:05.153579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-11-08 13:52:05.153592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-11-08 13:52:05.153609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-11-08 13:52:05.153620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-11-08 13:52:05.153637 | orchestrator | 2025-11-08 13:52:05.153649 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-11-08 13:52:05.153660 | orchestrator | Saturday 08 November 2025 13:45:38 +0000 (0:00:03.018) 0:00:13.998 ***** 2025-11-08 13:52:05.153671 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:52:05.153682 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:52:05.153693 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:52:05.153704 | orchestrator | 2025-11-08 13:52:05.153715 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-11-08 13:52:05.153726 | orchestrator | Saturday 08 November 2025 13:45:40 +0000 (0:00:01.842) 0:00:15.841 ***** 2025-11-08 13:52:05.153737 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-11-08 13:52:05.153748 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-11-08 13:52:05.153759 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-11-08 13:52:05.153770 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-11-08 13:52:05.153781 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-11-08 13:52:05.153792 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-11-08 13:52:05.153802 | orchestrator | 2025-11-08 13:52:05.153814 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-11-08 13:52:05.153823 | orchestrator | Saturday 08 November 2025 13:45:43 +0000 (0:00:03.402) 0:00:19.243 ***** 2025-11-08 13:52:05.153833 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:52:05.153842 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:52:05.153852 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:52:05.153861 | orchestrator | 2025-11-08 13:52:05.153871 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-11-08 13:52:05.153880 | orchestrator | Saturday 08 November 2025 13:45:44 +0000 (0:00:01.277) 0:00:20.520 ***** 2025-11-08 
13:52:05.153890 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:52:05.153899 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:52:05.153909 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:52:05.153918 | orchestrator | 2025-11-08 13:52:05.153928 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-11-08 13:52:05.153937 | orchestrator | Saturday 08 November 2025 13:45:47 +0000 (0:00:02.963) 0:00:23.483 ***** 2025-11-08 13:52:05.153947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-11-08 13:52:05.153976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-08 13:52:05.153987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-08 13:52:05.154009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9e2624fd9a88e82d9933b5007e9355a055a5a44d', '__omit_place_holder__9e2624fd9a88e82d9933b5007e9355a055a5a44d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-11-08 13:52:05.154230 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.154249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-11-08 13:52:05.154267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-08 13:52:05.154284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-08 13:52:05.154302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9e2624fd9a88e82d9933b5007e9355a055a5a44d', '__omit_place_holder__9e2624fd9a88e82d9933b5007e9355a055a5a44d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-11-08 13:52:05.154320 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.154350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-11-08 13:52:05.154389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-08 13:52:05.154401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-08 13:52:05.154411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9e2624fd9a88e82d9933b5007e9355a055a5a44d', '__omit_place_holder__9e2624fd9a88e82d9933b5007e9355a055a5a44d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-11-08 13:52:05.154421 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.154432 | orchestrator | 2025-11-08 13:52:05.154441 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-11-08 13:52:05.154452 | orchestrator | Saturday 08 November 2025 13:45:50 +0000 (0:00:02.839) 0:00:26.323 ***** 2025-11-08 13:52:05.154462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-11-08 13:52:05.154472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-11-08 13:52:05.154520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-11-08 13:52:05.154545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-11-08 13:52:05.154556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-08 13:52:05.154567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9e2624fd9a88e82d9933b5007e9355a055a5a44d', '__omit_place_holder__9e2624fd9a88e82d9933b5007e9355a055a5a44d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-11-08 13:52:05.154577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-11-08 13:52:05.154587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-08 13:52:05.154597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': 
['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9e2624fd9a88e82d9933b5007e9355a055a5a44d', '__omit_place_holder__9e2624fd9a88e82d9933b5007e9355a055a5a44d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-11-08 13:52:05.154621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-11-08 13:52:05.154637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-08 13:52:05.154647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9e2624fd9a88e82d9933b5007e9355a055a5a44d', '__omit_place_holder__9e2624fd9a88e82d9933b5007e9355a055a5a44d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-11-08 13:52:05.154658 | orchestrator | 2025-11-08 13:52:05.154668 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-11-08 13:52:05.154678 | orchestrator | Saturday 08 November 2025 13:45:54 +0000 (0:00:03.631) 0:00:29.954 ***** 2025-11-08 13:52:05.154688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-11-08 13:52:05.154698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-11-08 13:52:05.154708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-11-08 13:52:05.154755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-11-08 13:52:05.154772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-11-08 13:52:05.154783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-11-08 13:52:05.154793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-11-08 13:52:05.154803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-11-08 13:52:05.154813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-11-08 13:52:05.154823 | orchestrator | 2025-11-08 13:52:05.155003 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-11-08 13:52:05.155021 | orchestrator | Saturday 08 November 2025 13:45:58 +0000 (0:00:04.284) 0:00:34.238 ***** 2025-11-08 13:52:05.155031 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-11-08 13:52:05.155042 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-11-08 13:52:05.155051 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-11-08 13:52:05.155061 | orchestrator | 2025-11-08 13:52:05.155099 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-11-08 13:52:05.155109 | orchestrator | Saturday 08 November 2025 13:46:00 +0000 (0:00:02.263) 0:00:36.502 ***** 2025-11-08 13:52:05.155119 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-11-08 13:52:05.155129 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-11-08 13:52:05.155138 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-11-08 13:52:05.155148 | orchestrator | 2025-11-08 13:52:05.155172 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-11-08 13:52:05.155183 | orchestrator | Saturday 08 November 2025 13:46:06 +0000 (0:00:05.511) 0:00:42.013 ***** 2025-11-08 13:52:05.155192 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.155202 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.155211 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.155240 | orchestrator | 2025-11-08 13:52:05.155258 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-11-08 13:52:05.155274 | orchestrator | Saturday 08 November 2025 13:46:07 +0000 (0:00:01.531) 0:00:43.545 ***** 2025-11-08 13:52:05.155290 | orchestrator | changed: [testbed-node-0] => 
(item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-11-08 13:52:05.155307 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-11-08 13:52:05.155333 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-11-08 13:52:05.155350 | orchestrator | 2025-11-08 13:52:05.155362 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-11-08 13:52:05.155372 | orchestrator | Saturday 08 November 2025 13:46:10 +0000 (0:00:03.265) 0:00:46.811 ***** 2025-11-08 13:52:05.155382 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-11-08 13:52:05.155392 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-11-08 13:52:05.155401 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-11-08 13:52:05.155411 | orchestrator | 2025-11-08 13:52:05.155465 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-11-08 13:52:05.155475 | orchestrator | Saturday 08 November 2025 13:46:15 +0000 (0:00:04.698) 0:00:51.510 ***** 2025-11-08 13:52:05.155544 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-11-08 13:52:05.155559 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-11-08 13:52:05.155569 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-11-08 13:52:05.155579 | orchestrator | 2025-11-08 13:52:05.155589 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-11-08 13:52:05.155673 | orchestrator | Saturday 08 November 2025 13:46:17 +0000 (0:00:02.106) 0:00:53.617 ***** 2025-11-08 13:52:05.155682 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-11-08 13:52:05.155690 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-11-08 13:52:05.155706 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-11-08 13:52:05.155714 | orchestrator | 2025-11-08 13:52:05.155722 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-11-08 13:52:05.155730 | orchestrator | Saturday 08 November 2025 13:46:20 +0000 (0:00:03.134) 0:00:56.752 ***** 2025-11-08 13:52:05.155738 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:52:05.155745 | orchestrator | 2025-11-08 13:52:05.155753 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-11-08 13:52:05.155761 | orchestrator | Saturday 08 November 2025 13:46:22 +0000 (0:00:01.249) 0:00:58.001 ***** 2025-11-08 13:52:05.155770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-11-08 13:52:05.155780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-11-08 13:52:05.155795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-11-08 13:52:05.155809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-11-08 13:52:05.155818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-11-08 13:52:05.155827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-11-08 13:52:05.155851 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-11-08 13:52:05.155860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-11-08 13:52:05.155869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-11-08 13:52:05.155877 | orchestrator | 2025-11-08 13:52:05.155885 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-11-08 13:52:05.155893 | orchestrator | Saturday 08 November 2025 13:46:25 +0000 (0:00:03.739) 0:01:01.741 ***** 2025-11-08 13:52:05.155909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-11-08 13:52:05.155922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-08 13:52:05.155931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-08 13:52:05.155945 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.155954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-11-08 13:52:05.155962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-08 13:52:05.155971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-08 13:52:05.155979 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.155987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-11-08 13:52:05.156001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-08 
13:52:05.156014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-08 13:52:05.156029 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.156037 | orchestrator | 2025-11-08 13:52:05.156045 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-11-08 13:52:05.156053 | orchestrator | Saturday 08 November 2025 13:46:26 +0000 (0:00:00.763) 0:01:02.505 ***** 2025-11-08 13:52:05.156061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-11-08 13:52:05.156070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-08 13:52:05.156078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-08 13:52:05.156086 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.156094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-11-08 13:52:05.156108 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-08 13:52:05.156120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-08 13:52:05.156135 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.156143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-11-08 13:52:05.156151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-08 13:52:05.156160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-08 13:52:05.156168 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.156176 | orchestrator | 2025-11-08 13:52:05.156183 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-11-08 13:52:05.156191 | orchestrator | Saturday 08 November 2025 13:46:27 +0000 (0:00:00.764) 0:01:03.270 ***** 2025-11-08 13:52:05.156200 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-11-08 13:52:05.156327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-08 13:52:05.156339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-08 13:52:05.156354 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.156362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-11-08 13:52:05.156371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-11-08 13:52:05.156379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-08 13:52:05.156447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-08 13:52:05.156466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-08 13:52:05.156475 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.156509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-08 13:52:05.156518 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.156526 | orchestrator | 2025-11-08 13:52:05.156534 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-11-08 13:52:05.156547 | orchestrator | Saturday 08 November 2025 13:46:28 +0000 (0:00:01.241) 0:01:04.511 ***** 2025-11-08 13:52:05.156559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-11-08 13:52:05.156568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-08 13:52:05.156577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-08 13:52:05.156585 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.156593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-11-08 13:52:05.156601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-08 13:52:05.156609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-08 13:52:05.156618 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.156631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 
'timeout': '30'}}})  2025-11-08 13:52:05.156648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-08 13:52:05.156657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-08 13:52:05.156665 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.156673 | orchestrator | 2025-11-08 13:52:05.156681 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-11-08 13:52:05.156689 | orchestrator | Saturday 08 November 2025 13:46:29 +0000 (0:00:00.638) 0:01:05.150 ***** 2025-11-08 13:52:05.156697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-11-08 13:52:05.156705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-08 13:52:05.156713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-08 13:52:05.156721 | orchestrator | skipping: [testbed-node-0] 2025-11-08 
13:52:05.156735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-11-08 13:52:05.156748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-08 13:52:05.156760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-08 13:52:05.156769 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.156777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-11-08 13:52:05.156785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-08 13:52:05.156793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-08 13:52:05.156801 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.156809 | orchestrator | 2025-11-08 13:52:05.156817 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-11-08 13:52:05.156825 | orchestrator | Saturday 08 November 2025 13:46:30 +0000 (0:00:00.802) 0:01:05.952 ***** 2025-11-08 13:52:05.156834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-11-08 13:52:05.156855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-08 13:52:05.156867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-08 13:52:05.156875 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.156884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-11-08 13:52:05.156892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-08 13:52:05.156900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-08 13:52:05.156908 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.156916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-11-08 13:52:05.157030 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-08 13:52:05.157042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-08 13:52:05.157050 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.157058 | orchestrator | 2025-11-08 13:52:05.157066 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-11-08 13:52:05.157074 | orchestrator | Saturday 08 November 2025 13:46:31 +0000 (0:00:01.381) 0:01:07.333 ***** 2025-11-08 13:52:05.157086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-11-08 13:52:05.157094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-08 13:52:05.157103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-08 13:52:05.157111 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.157119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-11-08 13:52:05.157133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-08 13:52:05.157150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2025-11-08 13:52:05.157159 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.157171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-11-08 13:52:05.157179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-08 13:52:05.157187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-08 13:52:05.157195 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.157203 | orchestrator | 2025-11-08 13:52:05.157211 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-11-08 13:52:05.157219 | orchestrator | Saturday 08 November 2025 13:46:32 +0000 (0:00:00.832) 0:01:08.166 ***** 2025-11-08 13:52:05.157227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-11-08 13:52:05.157246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-08 13:52:05.157260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-08 13:52:05.157274 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.157295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-11-08 13:52:05.157319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-08 13:52:05.157334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-08 13:52:05.157347 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.157360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-11-08 13:52:05.157401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-08 13:52:05.157417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-08 13:52:05.157427 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.157435 | orchestrator | 2025-11-08 13:52:05.157443 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-11-08 13:52:05.157451 | orchestrator | Saturday 08 November 2025 13:46:33 +0000 (0:00:01.050) 0:01:09.217 ***** 2025-11-08 13:52:05.157459 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-11-08 13:52:05.157468 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-11-08 13:52:05.157483 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-11-08 13:52:05.157515 | orchestrator | 2025-11-08 13:52:05.157524 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-11-08 13:52:05.157531 | orchestrator | Saturday 08 November 2025 13:46:35 +0000 (0:00:02.000) 0:01:11.217 ***** 2025-11-08 13:52:05.157539 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-11-08 13:52:05.157547 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-11-08 13:52:05.157555 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-11-08 13:52:05.157563 | orchestrator | 2025-11-08 13:52:05.157571 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-11-08 13:52:05.157579 | orchestrator | Saturday 08 November 2025 13:46:37 +0000 (0:00:01.636) 0:01:12.854 ***** 2025-11-08 13:52:05.157587 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-11-08 13:52:05.157599 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-11-08 13:52:05.157607 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-11-08 13:52:05.157615 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-11-08 13:52:05.157623 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.157631 | orchestrator | skipping: [testbed-node-0] => (item={'src': 
'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-11-08 13:52:05.157639 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.157647 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-11-08 13:52:05.157655 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.157669 | orchestrator | 2025-11-08 13:52:05.157677 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-11-08 13:52:05.157685 | orchestrator | Saturday 08 November 2025 13:46:38 +0000 (0:00:01.485) 0:01:14.339 ***** 2025-11-08 13:52:05.157772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-11-08 13:52:05.157783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-11-08 13:52:05.157791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-11-08 13:52:05.157825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-11-08 13:52:05.157835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-11-08 13:52:05.157848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-11-08 13:52:05.157862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-11-08 13:52:05.157870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-11-08 13:52:05.157879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-11-08 13:52:05.157887 | orchestrator | 2025-11-08 13:52:05.157895 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-11-08 13:52:05.157902 | orchestrator | Saturday 08 November 2025 13:46:41 +0000 (0:00:03.403) 0:01:17.743 ***** 2025-11-08 13:52:05.157910 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:52:05.157918 | orchestrator | 2025-11-08 13:52:05.157926 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-11-08 13:52:05.157934 | orchestrator | Saturday 08 November 2025 13:46:42 +0000 (0:00:00.859) 0:01:18.602 ***** 2025-11-08 13:52:05.157943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-11-08 13:52:05.159144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-11-08 13:52:05.159179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.159197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-11-08 13:52:05.159204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.159211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 
'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-11-08 13:52:05.159218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-11-08 13:52:05.159275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.159294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-11-08 13:52:05.159313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.159325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.159336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.159347 | orchestrator | 2025-11-08 13:52:05.159357 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-11-08 13:52:05.159367 | orchestrator | Saturday 08 November 2025 13:46:47 +0000 (0:00:04.533) 0:01:23.136 ***** 2025-11-08 13:52:05.159375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-11-08 13:52:05.159435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-11-08 13:52:05.160525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.160555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.160562 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.160570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-11-08 13:52:05.160577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-11-08 13:52:05.160584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.160591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.160598 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.160764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-11-08 13:52:05.160787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-11-08 13:52:05.160794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.160801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.160808 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.160814 | orchestrator | 2025-11-08 13:52:05.160822 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-11-08 13:52:05.160829 | orchestrator | Saturday 08 November 2025 13:46:48 +0000 (0:00:01.050) 0:01:24.186 ***** 2025-11-08 13:52:05.160836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-11-08 13:52:05.160846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-11-08 13:52:05.160901 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.160911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-11-08 13:52:05.160918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-11-08 13:52:05.160925 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.160932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-11-08 13:52:05.160944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-11-08 13:52:05.160951 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.160958 | orchestrator | 2025-11-08 13:52:05.161040 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-11-08 13:52:05.161056 | orchestrator | Saturday 08 November 2025 13:46:49 +0000 (0:00:00.877) 0:01:25.064 ***** 2025-11-08 13:52:05.161067 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:52:05.161077 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:52:05.161087 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:52:05.161096 | orchestrator | 2025-11-08 13:52:05.161106 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-11-08 13:52:05.161116 | orchestrator | Saturday 08 November 2025 13:46:50 +0000 (0:00:01.731) 0:01:26.795 ***** 2025-11-08 13:52:05.161125 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:52:05.161136 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:52:05.161147 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:52:05.161158 | orchestrator | 2025-11-08 13:52:05.161168 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-11-08 13:52:05.161178 | orchestrator | Saturday 08 November 2025 13:46:53 +0000 (0:00:02.591) 0:01:29.387 ***** 2025-11-08 13:52:05.161185 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:52:05.161191 | orchestrator | 2025-11-08 13:52:05.161203 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-11-08 13:52:05.161210 | orchestrator | Saturday 08 November 2025 13:46:54 +0000 (0:00:00.842) 0:01:30.229 ***** 2025-11-08 13:52:05.161218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-08 13:52:05.161228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.161236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.161248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-08 13:52:05.161348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.161372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.161384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-08 13:52:05.161396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.161404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.161417 | orchestrator | 2025-11-08 13:52:05.161424 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-11-08 13:52:05.161432 | orchestrator | Saturday 08 November 2025 13:46:58 +0000 (0:00:03.769) 0:01:33.999 ***** 2025-11-08 13:52:05.162225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-11-08 13:52:05.162333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.162352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.162365 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.162379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-11-08 13:52:05.162391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.162424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.162432 | orchestrator | skipping: 
[testbed-node-1] 2025-11-08 13:52:05.162455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-11-08 13:52:05.162467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.162474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.162480 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.162516 | orchestrator | 2025-11-08 13:52:05.162524 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-11-08 13:52:05.162533 | orchestrator | Saturday 08 November 2025 13:46:58 +0000 (0:00:00.686) 0:01:34.686 ***** 2025-11-08 13:52:05.162540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-11-08 13:52:05.162550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-11-08 13:52:05.162562 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.162569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-11-08 13:52:05.162575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-11-08 13:52:05.162581 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.162588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-11-08 13:52:05.162594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-11-08 13:52:05.162600 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.162606 | orchestrator | 2025-11-08 13:52:05.162613 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-11-08 13:52:05.162619 | orchestrator | Saturday 08 November 2025 13:47:00 +0000 (0:00:01.236) 0:01:35.923 ***** 2025-11-08 13:52:05.162625 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:52:05.162632 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:52:05.162638 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:52:05.162644 | orchestrator | 2025-11-08 13:52:05.162650 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-11-08 13:52:05.162656 | orchestrator | Saturday 08 November 2025 13:47:01 +0000 (0:00:01.435) 0:01:37.358 ***** 2025-11-08 13:52:05.162662 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:52:05.162668 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:52:05.162674 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:52:05.162681 | orchestrator | 2025-11-08 13:52:05.162695 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-11-08 13:52:05.162701 | orchestrator | Saturday 08 November 2025 13:47:03 +0000 (0:00:02.238) 0:01:39.596 ***** 2025-11-08 13:52:05.162708 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.162714 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.162720 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.162726 | orchestrator | 2025-11-08 13:52:05.162733 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-11-08 13:52:05.162739 | orchestrator | Saturday 08 November 2025 13:47:04 +0000 (0:00:00.356) 0:01:39.953 ***** 2025-11-08 13:52:05.162745 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:52:05.162751 | orchestrator | 2025-11-08 13:52:05.162758 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-11-08 13:52:05.162764 | orchestrator | Saturday 08 November 2025 13:47:05 +0000 (0:00:01.076) 0:01:41.029 ***** 2025-11-08 13:52:05.162775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-11-08 13:52:05.162789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-11-08 13:52:05.162796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-11-08 13:52:05.162803 | orchestrator | 2025-11-08 13:52:05.162809 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-11-08 13:52:05.162816 | orchestrator | Saturday 08 November 2025 13:47:08 +0000 (0:00:03.481) 0:01:44.511 ***** 2025-11-08 13:52:05.162830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-11-08 13:52:05.162836 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.162843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-11-08 13:52:05.162850 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.162856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-11-08 13:52:05.162868 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.162874 | orchestrator | 2025-11-08 13:52:05.162880 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-11-08 13:52:05.162886 | orchestrator | Saturday 08 November 2025 13:47:11 +0000 (0:00:02.785) 0:01:47.297 ***** 2025-11-08 13:52:05.162895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-11-08 13:52:05.162904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-11-08 13:52:05.162913 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.162981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-11-08 13:52:05.162994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-11-08 13:52:05.163000 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.163026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-11-08 13:52:05.163033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-11-08 13:52:05.163039 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.163045 | orchestrator | 2025-11-08 13:52:05.163051 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-11-08 13:52:05.163064 | orchestrator | Saturday 08 November 2025 13:47:15 +0000 (0:00:04.023) 0:01:51.320 ***** 2025-11-08 13:52:05.163070 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.163080 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.163086 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.163092 | orchestrator | 2025-11-08 13:52:05.163098 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-11-08 13:52:05.163104 | orchestrator | Saturday 08 November 2025 13:47:16 +0000 (0:00:00.636) 0:01:51.956 ***** 2025-11-08 13:52:05.163110 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.163117 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.163123 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.163129 | orchestrator | 2025-11-08 13:52:05.163135 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-11-08 13:52:05.163141 | orchestrator | Saturday 08 November 2025 13:47:17 +0000 (0:00:01.439) 0:01:53.395 ***** 2025-11-08 13:52:05.163148 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:52:05.163154 | orchestrator | 2025-11-08 13:52:05.163160 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-11-08 13:52:05.163166 | orchestrator | Saturday 08 November 2025 13:47:18 +0000 (0:00:00.778) 0:01:54.174 ***** 2025-11-08 13:52:05.163173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-08 13:52:05.163181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.163189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.163201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.163217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-08 
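The cinder items above show the shape of the per-service dict that the haproxy-config role loops over: only entries carrying a 'haproxy' section (here cinder-api, with an internal and an external listener on port 8776 behind api.testbed.osism.xyz) yield frontend/backend configuration, which is why cinder-api is reported as changed while cinder-scheduler, cinder-volume and cinder-backup are skipped. A minimal Python sketch of that filtering follows; the values are copied from this log, while the dict name and the haproxy_listeners() helper are hypothetical illustrations, not kolla-ansible code.

# Minimal sketch, assuming a plain dict mirroring the Ansible items logged above.
# Values are copied from this log; cinder_services and haproxy_listeners() are
# hypothetical and only illustrate why cinder-api is "changed" while the
# scheduler/volume/backup items are "skipping": they define no 'haproxy' section.
cinder_services = {
    "cinder-api": {
        "container_name": "cinder_api",
        "enabled": True,
        "haproxy": {
            "cinder_api": {"enabled": "yes", "mode": "http", "external": False,
                           "port": "8776", "listen_port": "8776", "tls_backend": "no"},
            "cinder_api_external": {"enabled": "yes", "mode": "http", "external": True,
                                    "external_fqdn": "api.testbed.osism.xyz",
                                    "port": "8776", "listen_port": "8776", "tls_backend": "no"},
        },
    },
    "cinder-scheduler": {"container_name": "cinder_scheduler", "enabled": True},
    "cinder-volume": {"container_name": "cinder_volume", "enabled": True},
    "cinder-backup": {"container_name": "cinder_backup", "enabled": True},
}

def haproxy_listeners(services):
    # Yield (service, listener, config) only for services that define haproxy listeners.
    for name, svc in services.items():
        for listener, cfg in svc.get("haproxy", {}).items():
            yield name, listener, cfg

for name, listener, cfg in haproxy_listeners(cinder_services):
    scope = "external" if cfg["external"] else "internal"
    print(f"{name}: {listener} ({scope}) -> port {cfg['port']}")
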
13:52:05.163228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.163239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.163250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.163267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-08 13:52:05.163290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.163302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.163313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.163324 | orchestrator | 2025-11-08 13:52:05.163335 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-11-08 13:52:05.163347 | orchestrator | Saturday 08 November 2025 13:47:22 +0000 (0:00:04.293) 0:01:58.467 ***** 2025-11-08 13:52:05.163358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-11-08 13:52:05.163369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.163391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.163398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.163405 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.163411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-11-08 13:52:05.163418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.163424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 
'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.163442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.163449 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.163459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-11-08 13:52:05.163466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.163472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.163479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.163509 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.163516 | orchestrator | 2025-11-08 13:52:05.163522 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-11-08 13:52:05.163529 | orchestrator | Saturday 08 November 2025 13:47:23 +0000 (0:00:01.189) 0:01:59.657 ***** 2025-11-08 13:52:05.163535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-11-08 13:52:05.163547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-11-08 13:52:05.163554 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.163560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-11-08 13:52:05.163567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-11-08 13:52:05.163573 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.163583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-11-08 13:52:05.163590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-11-08 13:52:05.163596 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.163603 | orchestrator | 2025-11-08 13:52:05.163609 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-11-08 13:52:05.163615 | orchestrator | Saturday 08 November 2025 13:47:24 +0000 (0:00:00.940) 0:02:00.597 ***** 2025-11-08 13:52:05.163621 
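Unlike the barbican and cinder listeners, the ceph-rgw entry earlier in this run supplies a custom_member_list, since the radosgw backends live on the Ceph nodes (testbed-node-3 through testbed-node-5, port 8081) rather than on the hosts running the role. A short sketch of how such 'server ...' backend lines can be assembled; the haproxy_members() helper is hypothetical, while the hostnames, IPs, port and health-check options are taken from the log above.

# Illustrative only: build haproxy backend member lines like those in the
# ceph-rgw custom_member_list logged earlier. haproxy_members() is a
# hypothetical helper; hosts, port and check parameters come from this log.
def haproxy_members(hosts, port, check="check inter 2000 rise 2 fall 5"):
    return [f"server {name} {ip}:{port} {check}" for name, ip in hosts]

ceph_rgw_hosts = [
    ("testbed-node-3", "192.168.16.13"),
    ("testbed-node-4", "192.168.16.14"),
    ("testbed-node-5", "192.168.16.15"),
]

for line in haproxy_members(ceph_rgw_hosts, 8081):
    print(line)
# prints, e.g.: server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5
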
| orchestrator | changed: [testbed-node-0] 2025-11-08 13:52:05.163628 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:52:05.163634 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:52:05.163640 | orchestrator | 2025-11-08 13:52:05.163646 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-11-08 13:52:05.163652 | orchestrator | Saturday 08 November 2025 13:47:26 +0000 (0:00:01.305) 0:02:01.903 ***** 2025-11-08 13:52:05.163659 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:52:05.163665 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:52:05.163671 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:52:05.163677 | orchestrator | 2025-11-08 13:52:05.163683 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-11-08 13:52:05.163689 | orchestrator | Saturday 08 November 2025 13:47:28 +0000 (0:00:02.014) 0:02:03.918 ***** 2025-11-08 13:52:05.163696 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.163702 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.163708 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.163714 | orchestrator | 2025-11-08 13:52:05.163720 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-11-08 13:52:05.163727 | orchestrator | Saturday 08 November 2025 13:47:28 +0000 (0:00:00.441) 0:02:04.359 ***** 2025-11-08 13:52:05.163733 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.163739 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.163745 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.163751 | orchestrator | 2025-11-08 13:52:05.163758 | orchestrator | TASK [include_role : designate] ************************************************ 2025-11-08 13:52:05.163768 | orchestrator | Saturday 08 November 2025 13:47:28 +0000 (0:00:00.270) 0:02:04.629 ***** 2025-11-08 13:52:05.163774 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:52:05.163780 | orchestrator | 2025-11-08 13:52:05.163786 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-11-08 13:52:05.163793 | orchestrator | Saturday 08 November 2025 13:47:29 +0000 (0:00:00.822) 0:02:05.452 ***** 2025-11-08 13:52:05.163799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-08 13:52:05.163811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 
'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-08 13:52:05.163825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-08 13:52:05.163831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.163838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.163849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-08 13:52:05.163856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.163863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.163874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-08 13:52:05.163884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.163891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.163903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.163909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-08 13:52:05.163916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.163929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.163935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.163945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.163952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.163966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.163972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.163979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.163985 | orchestrator | 2025-11-08 13:52:05.163991 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-11-08 13:52:05.163997 | orchestrator | Saturday 08 November 2025 13:47:33 +0000 (0:00:04.192) 0:02:09.644 ***** 2025-11-08 13:52:05.164009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-08 13:52:05.164020 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-08 13:52:05.164026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.164038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.164044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.164050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.164061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.164071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-08 13:52:05.164077 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.164084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-08 13:52:05.164095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.164101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.164108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.164118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.164125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.164131 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.164141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-08 13:52:05.164152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-08 13:52:05.164159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.164165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.164172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.164182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.164192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.164203 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.164209 | orchestrator | 2025-11-08 13:52:05.164215 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-11-08 13:52:05.164222 | orchestrator | Saturday 08 November 2025 13:47:34 +0000 (0:00:01.031) 0:02:10.676 ***** 2025-11-08 13:52:05.164229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-11-08 13:52:05.164235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-11-08 13:52:05.164243 | orchestrator | skipping: 
[testbed-node-0] 2025-11-08 13:52:05.164250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-11-08 13:52:05.164256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-11-08 13:52:05.164262 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.164268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-11-08 13:52:05.164275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-11-08 13:52:05.164281 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.164287 | orchestrator | 2025-11-08 13:52:05.164293 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-11-08 13:52:05.164300 | orchestrator | Saturday 08 November 2025 13:47:35 +0000 (0:00:01.023) 0:02:11.699 ***** 2025-11-08 13:52:05.164306 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:52:05.164312 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:52:05.164318 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:52:05.164324 | orchestrator | 2025-11-08 13:52:05.164330 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-11-08 13:52:05.164337 | orchestrator | Saturday 08 November 2025 13:47:37 +0000 (0:00:01.970) 0:02:13.669 ***** 2025-11-08 13:52:05.164343 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:52:05.164349 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:52:05.164355 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:52:05.164361 | orchestrator | 2025-11-08 13:52:05.164367 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-11-08 13:52:05.164373 | orchestrator | Saturday 08 November 2025 13:47:39 +0000 (0:00:01.911) 0:02:15.581 ***** 2025-11-08 13:52:05.164380 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.164386 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.164392 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.164398 | orchestrator | 2025-11-08 13:52:05.164404 | orchestrator | TASK [include_role : glance] *************************************************** 2025-11-08 13:52:05.164410 | orchestrator | Saturday 08 November 2025 13:47:40 +0000 (0:00:00.675) 0:02:16.256 ***** 2025-11-08 13:52:05.164417 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:52:05.164423 | orchestrator | 2025-11-08 13:52:05.164429 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-11-08 13:52:05.164435 | orchestrator | Saturday 08 November 2025 13:47:41 +0000 (0:00:00.887) 0:02:17.144 ***** 2025-11-08 13:52:05.164460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-08 13:52:05.164470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-11-08 13:52:05.164497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-08 13:52:05.164510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-11-08 13:52:05.164523 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-08 13:52:05.164539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-11-08 13:52:05.164546 | orchestrator | 2025-11-08 13:52:05.164553 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-11-08 13:52:05.164559 | orchestrator | Saturday 08 November 2025 13:47:45 +0000 (0:00:04.497) 0:02:21.641 ***** 2025-11-08 13:52:05.165856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-11-08 13:52:05.165902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-11-08 13:52:05.165910 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.165918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-11-08 13:52:05.165941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 
'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-11-08 13:52:05.165948 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.165955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-11-08 13:52:05.165969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-11-08 13:52:05.165980 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.165986 | orchestrator | 2025-11-08 13:52:05.165993 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-11-08 13:52:05.166001 | orchestrator | Saturday 08 November 2025 13:47:49 +0000 (0:00:03.680) 0:02:25.321 ***** 2025-11-08 13:52:05.166009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-11-08 13:52:05.166048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-11-08 13:52:05.166057 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.166064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-11-08 13:52:05.166072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-11-08 13:52:05.166083 | orchestrator | skipping: [testbed-node-1] 2025-11-08 
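The 'haproxy' sub-dicts dumped by the glance items above are what the haproxy-config role turns into per-service HAProxy frontends and backends; the custom_member_list entries are already complete 'server ...' lines, with the trailing empty strings filtered out. A rough Python sketch of that mapping, assuming only the dict shape visible in the log (render_backend and the '_back' naming are illustrative, not kolla-ansible's actual template):

def render_backend(name, cfg):
    """Render a simplified HAProxy backend from a kolla-style service entry."""
    lines = [f"backend {name}_back", f"    mode {cfg['mode']}"]
    # backend_http_extra entries (e.g. 'timeout server 6h') are inserted verbatim
    lines += [f"    {extra}" for extra in cfg.get("backend_http_extra", [])]
    # custom_member_list entries are full 'server ...' lines; drop empty strings
    lines += [f"    {member}" for member in cfg.get("custom_member_list", []) if member]
    return "\n".join(lines)

glance_api = {
    "mode": "http",
    "backend_http_extra": ["timeout server 6h"],
    "custom_member_list": [
        "server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5",
        "server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5",
        "server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5",
        "",
    ],
}

print(render_backend("glance_api", glance_api))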
13:52:05.166090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-11-08 13:52:05.166104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-11-08 13:52:05.166112 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.166119 | orchestrator | 2025-11-08 13:52:05.166126 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-11-08 13:52:05.166134 | orchestrator | Saturday 08 November 2025 13:47:53 +0000 (0:00:03.679) 0:02:29.001 ***** 2025-11-08 13:52:05.166141 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:52:05.166149 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:52:05.166156 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:52:05.166163 | orchestrator | 2025-11-08 13:52:05.166169 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-11-08 13:52:05.166175 | orchestrator | Saturday 08 November 2025 13:47:54 +0000 (0:00:01.389) 0:02:30.390 ***** 2025-11-08 13:52:05.166184 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:52:05.166190 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:52:05.166197 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:52:05.166203 | orchestrator | 2025-11-08 13:52:05.166209 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-11-08 13:52:05.166216 | orchestrator | Saturday 08 November 2025 13:47:56 +0000 (0:00:02.340) 0:02:32.731 ***** 2025-11-08 13:52:05.166222 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.166228 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.166234 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.166240 | orchestrator | 2025-11-08 13:52:05.166247 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-11-08 13:52:05.166253 | orchestrator | Saturday 08 November 2025 13:47:57 +0000 (0:00:00.573) 0:02:33.305 ***** 2025-11-08 13:52:05.166259 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:52:05.166265 | orchestrator | 2025-11-08 13:52:05.166271 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-11-08 13:52:05.166277 | orchestrator | Saturday 08 November 2025 13:47:58 +0000 (0:00:00.914) 0:02:34.220 ***** 2025-11-08 13:52:05.166285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-08 13:52:05.166296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-08 13:52:05.166303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-08 13:52:05.166309 | orchestrator | 2025-11-08 13:52:05.166316 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-11-08 13:52:05.166322 | orchestrator | Saturday 08 November 2025 13:48:01 +0000 (0:00:03.372) 0:02:37.592 ***** 2025-11-08 13:52:05.166334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-11-08 13:52:05.166344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-11-08 13:52:05.166350 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.166357 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.166363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-11-08 13:52:05.166369 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.166376 | orchestrator | 2025-11-08 13:52:05.166385 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-11-08 13:52:05.166392 | orchestrator | Saturday 08 November 2025 13:48:02 +0000 (0:00:00.703) 0:02:38.296 ***** 2025-11-08 13:52:05.166399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-11-08 13:52:05.166406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-11-08 13:52:05.166413 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.166419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-11-08 13:52:05.166425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-11-08 13:52:05.166432 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.166438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-11-08 13:52:05.166444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-11-08 13:52:05.166451 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.166457 | orchestrator | 2025-11-08 13:52:05.166463 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-11-08 13:52:05.166469 | orchestrator | Saturday 08 November 2025 13:48:03 +0000 (0:00:00.804) 0:02:39.101 ***** 2025-11-08 13:52:05.166475 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:52:05.166481 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:52:05.166503 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:52:05.166509 | orchestrator | 2025-11-08 13:52:05.166515 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL 
rules config] ************ 2025-11-08 13:52:05.166521 | orchestrator | Saturday 08 November 2025 13:48:04 +0000 (0:00:01.661) 0:02:40.762 ***** 2025-11-08 13:52:05.166527 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:52:05.166533 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:52:05.166540 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:52:05.166546 | orchestrator | 2025-11-08 13:52:05.166552 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-11-08 13:52:05.166558 | orchestrator | Saturday 08 November 2025 13:48:07 +0000 (0:00:02.219) 0:02:42.982 ***** 2025-11-08 13:52:05.166564 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.166571 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.166581 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.166587 | orchestrator | 2025-11-08 13:52:05.166593 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-11-08 13:52:05.166600 | orchestrator | Saturday 08 November 2025 13:48:07 +0000 (0:00:00.441) 0:02:43.423 ***** 2025-11-08 13:52:05.166606 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:52:05.166612 | orchestrator | 2025-11-08 13:52:05.166618 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-11-08 13:52:05.166624 | orchestrator | Saturday 08 November 2025 13:48:08 +0000 (0:00:00.931) 0:02:44.354 ***** 2025-11-08 13:52:05.166668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-11-08 13:52:05.166691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-11-08 13:52:05.166702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-11-08 13:52:05.166713 | orchestrator | 2025-11-08 13:52:05.166719 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-11-08 13:52:05.166726 | orchestrator | Saturday 08 November 2025 13:48:12 +0000 (0:00:03.641) 0:02:47.996 ***** 2025-11-08 13:52:05.166740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-11-08 13:52:05.166833 | orchestrator | 
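In the horizon entries above, each frontend carries 'use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }', so HTTP-01 challenges are diverted to the acme_client backend before the redirect or the proxy to Horizon applies. A rough sketch of how such a frontend could be assembled from the dict shape shown in the log (illustrative only; build_frontend, the '_front'/'_back' naming and the 192.0.2.10 placeholder VIP are assumptions, not taken from this deployment):

def build_frontend(name, cfg, vip):
    """Compose simplified HAProxy frontend lines from a kolla-style service entry."""
    lines = [f"frontend {name}_front", f"    bind {vip}:{cfg['listen_port']}"]
    # extras such as the ACME use_backend rule are inserted verbatim
    lines += [f"    {extra}" for extra in cfg.get("frontend_http_extra", [])]
    lines += [f"    {extra}" for extra in cfg.get("frontend_redirect_extra", [])]
    if cfg["mode"] == "redirect":
        lines.append("    redirect scheme https code 301")
    else:
        lines.append(f"    default_backend {name}_back")
    return "\n".join(lines)

horizon_redirect = {
    "mode": "redirect", "port": "80", "listen_port": "80",
    "frontend_redirect_extra": [
        "use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }",
    ],
}

print(build_frontend("horizon_redirect", horizon_redirect, "192.0.2.10"))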
skipping: [testbed-node-0] 2025-11-08 13:52:05.166840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-11-08 13:52:05.166847 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.166863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-11-08 13:52:05.166875 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.166881 | orchestrator | 2025-11-08 13:52:05.166887 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-11-08 13:52:05.166894 | orchestrator | Saturday 08 November 2025 13:48:13 +0000 (0:00:01.069) 0:02:49.066 ***** 2025-11-08 13:52:05.166901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-11-08 13:52:05.166910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-11-08 13:52:05.166916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-11-08 13:52:05.166924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-11-08 13:52:05.166930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-11-08 13:52:05.166937 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.166943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-11-08 13:52:05.166950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-11-08 13:52:05.166956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-11-08 13:52:05.166967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-11-08 13:52:05.166978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-11-08 13:52:05.166987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-11-08 13:52:05.166993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-11-08 13:52:05.167000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-11-08 13:52:05.167006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-11-08 13:52:05.167012 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.167019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-11-08 13:52:05.167025 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.167031 | orchestrator | 2025-11-08 13:52:05.167037 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-11-08 13:52:05.167044 | orchestrator | Saturday 08 November 2025 13:48:14 +0000 (0:00:00.864) 0:02:49.930 ***** 2025-11-08 13:52:05.167050 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:52:05.167056 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:52:05.167062 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:52:05.167068 | orchestrator | 2025-11-08 13:52:05.167074 | orchestrator | TASK 
[proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-11-08 13:52:05.167080 | orchestrator | Saturday 08 November 2025 13:48:15 +0000 (0:00:01.181) 0:02:51.112 ***** 2025-11-08 13:52:05.167086 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:52:05.167093 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:52:05.167099 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:52:05.167105 | orchestrator | 2025-11-08 13:52:05.167111 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-11-08 13:52:05.167118 | orchestrator | Saturday 08 November 2025 13:48:17 +0000 (0:00:02.190) 0:02:53.303 ***** 2025-11-08 13:52:05.167124 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.167130 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.167136 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.167142 | orchestrator | 2025-11-08 13:52:05.167148 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-11-08 13:52:05.167154 | orchestrator | Saturday 08 November 2025 13:48:17 +0000 (0:00:00.267) 0:02:53.571 ***** 2025-11-08 13:52:05.167160 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.167166 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.167172 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.167179 | orchestrator | 2025-11-08 13:52:05.167185 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-11-08 13:52:05.167196 | orchestrator | Saturday 08 November 2025 13:48:18 +0000 (0:00:00.457) 0:02:54.029 ***** 2025-11-08 13:52:05.167203 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:52:05.167209 | orchestrator | 2025-11-08 13:52:05.167215 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-11-08 13:52:05.167221 | orchestrator | Saturday 08 November 2025 13:48:19 +0000 (0:00:00.891) 0:02:54.920 ***** 2025-11-08 13:52:05.167232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-08 13:52:05.167243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-08 13:52:05.167250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-11-08 13:52:05.167257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-08 13:52:05.167264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-08 13:52:05.167274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-11-08 13:52:05.167288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-08 13:52:05.167295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-08 13:52:05.167302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-11-08 13:52:05.167308 | orchestrator | 2025-11-08 13:52:05.167315 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-11-08 13:52:05.167322 | orchestrator | Saturday 08 November 2025 13:48:22 +0000 (0:00:03.266) 0:02:58.187 ***** 2025-11-08 13:52:05.167333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-11-08 13:52:05.167351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-08 13:52:05.167368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-11-08 13:52:05.167379 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.167394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-11-08 13:52:05.167405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-08 13:52:05.167416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-11-08 
13:52:05.167432 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.167443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-11-08 13:52:05.167460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-08 13:52:05.167475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-11-08 13:52:05.167528 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.167537 | orchestrator | 2025-11-08 13:52:05.167544 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-11-08 13:52:05.167550 | orchestrator | Saturday 08 November 2025 13:48:22 +0000 (0:00:00.556) 0:02:58.743 ***** 2025-11-08 13:52:05.167556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-11-08 13:52:05.167564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-11-08 13:52:05.167570 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.167577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': 
{'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-11-08 13:52:05.167583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-11-08 13:52:05.167595 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.167601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-11-08 13:52:05.167608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-11-08 13:52:05.167614 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.167620 | orchestrator | 2025-11-08 13:52:05.167626 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-11-08 13:52:05.167633 | orchestrator | Saturday 08 November 2025 13:48:23 +0000 (0:00:00.770) 0:02:59.514 ***** 2025-11-08 13:52:05.167639 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:52:05.167645 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:52:05.167651 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:52:05.167657 | orchestrator | 2025-11-08 13:52:05.167663 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-11-08 13:52:05.167670 | orchestrator | Saturday 08 November 2025 13:48:24 +0000 (0:00:01.241) 0:03:00.756 ***** 2025-11-08 13:52:05.167676 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:52:05.167682 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:52:05.167688 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:52:05.167694 | orchestrator | 2025-11-08 13:52:05.167700 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-11-08 13:52:05.167706 | orchestrator | Saturday 08 November 2025 13:48:26 +0000 (0:00:01.898) 0:03:02.654 ***** 2025-11-08 13:52:05.167712 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.167719 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.167725 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.167731 | orchestrator | 2025-11-08 13:52:05.167737 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-11-08 13:52:05.167743 | orchestrator | Saturday 08 November 2025 13:48:27 +0000 (0:00:00.421) 0:03:03.076 ***** 2025-11-08 13:52:05.167749 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:52:05.167756 | orchestrator | 2025-11-08 13:52:05.167762 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-11-08 13:52:05.167768 | orchestrator | Saturday 08 November 2025 13:48:28 +0000 (0:00:00.904) 0:03:03.981 ***** 2025-11-08 13:52:05.167783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 
'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-08 13:52:05.167791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.167802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-08 13:52:05.167809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-08 13:52:05.167816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.167827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.167834 | orchestrator | 2025-11-08 13:52:05.167844 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-11-08 13:52:05.167850 | orchestrator | Saturday 08 November 2025 13:48:31 +0000 (0:00:03.608) 0:03:07.589 ***** 2025-11-08 13:52:05.167857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-08 13:52:05.167867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.167873 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.167880 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-08 13:52:05.168036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.168047 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.168056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-08 13:52:05.168070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.168075 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.168081 | orchestrator | 2025-11-08 13:52:05.168086 | 
orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-11-08 13:52:05.168091 | orchestrator | Saturday 08 November 2025 13:48:32 +0000 (0:00:01.186) 0:03:08.776 ***** 2025-11-08 13:52:05.168098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-11-08 13:52:05.168104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-11-08 13:52:05.168110 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.168116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-11-08 13:52:05.168121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-11-08 13:52:05.168127 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.168132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-11-08 13:52:05.168138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-11-08 13:52:05.168143 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.168148 | orchestrator | 2025-11-08 13:52:05.168154 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-11-08 13:52:05.168159 | orchestrator | Saturday 08 November 2025 13:48:33 +0000 (0:00:00.964) 0:03:09.741 ***** 2025-11-08 13:52:05.168164 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:52:05.168170 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:52:05.168175 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:52:05.168180 | orchestrator | 2025-11-08 13:52:05.168186 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-11-08 13:52:05.168191 | orchestrator | Saturday 08 November 2025 13:48:35 +0000 (0:00:01.352) 0:03:11.093 ***** 2025-11-08 13:52:05.168197 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:52:05.168202 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:52:05.168207 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:52:05.168213 | orchestrator | 2025-11-08 13:52:05.168218 | orchestrator | TASK [include_role : manila] *************************************************** 2025-11-08 13:52:05.168224 | orchestrator | Saturday 08 November 2025 13:48:37 +0000 (0:00:02.195) 0:03:13.288 ***** 2025-11-08 13:52:05.168270 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:52:05.168281 | orchestrator | 2025-11-08 13:52:05.168287 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-11-08 13:52:05.168292 | orchestrator | Saturday 08 November 2025 13:48:39 +0000 (0:00:01.630) 0:03:14.919 ***** 2025-11-08 13:52:05.168301 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-11-08 13:52:05.168308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.168314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.168319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.168325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-11-08 13:52:05.168368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.168384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.168390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.168395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-11-08 13:52:05.168401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.168407 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.168449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.168463 | orchestrator | 2025-11-08 13:52:05.168469 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-11-08 13:52:05.168474 | orchestrator | Saturday 08 November 2025 13:48:43 +0000 (0:00:04.033) 0:03:18.953 ***** 2025-11-08 13:52:05.168482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-11-08 13:52:05.168501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.168507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', 
'/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.168513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.168518 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.168524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-11-08 13:52:05.168586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.168599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.168605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.168610 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.168616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-11-08 13:52:05.168622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.168627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.168688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.168696 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.168701 | orchestrator | 2025-11-08 13:52:05.168707 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-11-08 13:52:05.168712 | orchestrator | Saturday 08 November 2025 13:48:43 +0000 (0:00:00.742) 0:03:19.695 ***** 2025-11-08 13:52:05.168718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-11-08 13:52:05.168723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-11-08 13:52:05.168729 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.168738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-11-08 13:52:05.168743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-11-08 13:52:05.168749 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.168754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-11-08 13:52:05.168760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-11-08 13:52:05.168765 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.168771 | orchestrator | 2025-11-08 13:52:05.168776 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-11-08 13:52:05.168781 | orchestrator | Saturday 08 November 2025 13:48:45 +0000 (0:00:01.213) 0:03:20.909 ***** 2025-11-08 13:52:05.168787 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:52:05.168792 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:52:05.168798 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:52:05.168803 | orchestrator | 2025-11-08 13:52:05.168809 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-11-08 13:52:05.168814 | orchestrator | Saturday 08 November 2025 13:48:46 +0000 (0:00:01.357) 0:03:22.266 ***** 2025-11-08 13:52:05.168819 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:52:05.168825 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:52:05.168830 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:52:05.168835 | orchestrator | 2025-11-08 13:52:05.168841 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-11-08 13:52:05.168850 | orchestrator | Saturday 08 November 2025 13:48:48 +0000 (0:00:02.097) 0:03:24.364 ***** 2025-11-08 13:52:05.168856 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:52:05.168861 | orchestrator | 2025-11-08 13:52:05.168867 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-11-08 13:52:05.168872 | orchestrator | Saturday 08 November 2025 13:48:49 +0000 (0:00:01.350) 0:03:25.714 ***** 2025-11-08 13:52:05.168878 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-11-08 13:52:05.168884 | orchestrator | 2025-11-08 13:52:05.168889 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-11-08 13:52:05.168894 | orchestrator | Saturday 08 November 2025 13:48:52 +0000 (0:00:03.009) 0:03:28.724 
***** 2025-11-08 13:52:05.168940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-08 13:52:05.168952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-11-08 13:52:05.168958 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.168964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 
'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-08 13:52:05.168973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-11-08 13:52:05.168979 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.169014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-08 13:52:05.169023 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': 
{'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-11-08 13:52:05.169032 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.169037 | orchestrator | 2025-11-08 13:52:05.169043 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-11-08 13:52:05.169048 | orchestrator | Saturday 08 November 2025 13:48:55 +0000 (0:00:02.163) 0:03:30.888 ***** 2025-11-08 13:52:05.169054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-08 13:52:05.169086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-11-08 13:52:05.169094 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.169102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-08 13:52:05.169112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-11-08 13:52:05.169117 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.169149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 
2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-08 13:52:05.169159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-11-08 13:52:05.169165 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.169170 | orchestrator | 2025-11-08 13:52:05.169176 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-11-08 13:52:05.169187 | orchestrator | Saturday 08 November 2025 13:48:57 +0000 (0:00:02.431) 0:03:33.319 ***** 2025-11-08 13:52:05.169192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-11-08 13:52:05.169198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-11-08 13:52:05.169204 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.169210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-11-08 13:52:05.169215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-11-08 13:52:05.169221 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.169263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-11-08 13:52:05.169276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-11-08 13:52:05.169282 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.169292 | orchestrator | 2025-11-08 13:52:05.169297 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-11-08 13:52:05.169302 | orchestrator | Saturday 08 November 2025 13:49:00 +0000 (0:00:03.138) 0:03:36.457 ***** 2025-11-08 13:52:05.169308 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:52:05.169313 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:52:05.169319 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:52:05.169324 | orchestrator | 2025-11-08 13:52:05.169329 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-11-08 13:52:05.169335 | orchestrator | Saturday 08 November 2025 13:49:02 +0000 (0:00:01.742) 0:03:38.200 ***** 2025-11-08 13:52:05.169340 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.169346 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.169351 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.169356 | orchestrator | 2025-11-08 13:52:05.169362 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-11-08 13:52:05.169367 | orchestrator | Saturday 08 November 2025 13:49:03 +0000 (0:00:01.569) 0:03:39.770 ***** 2025-11-08 13:52:05.169372 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.169378 | orchestrator 
| skipping: [testbed-node-1] 2025-11-08 13:52:05.169383 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.169388 | orchestrator | 2025-11-08 13:52:05.169394 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-11-08 13:52:05.169399 | orchestrator | Saturday 08 November 2025 13:49:04 +0000 (0:00:00.320) 0:03:40.091 ***** 2025-11-08 13:52:05.169404 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:52:05.169410 | orchestrator | 2025-11-08 13:52:05.169415 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-11-08 13:52:05.169420 | orchestrator | Saturday 08 November 2025 13:49:05 +0000 (0:00:01.427) 0:03:41.518 ***** 2025-11-08 13:52:05.169426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-11-08 13:52:05.169432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-11-08 13:52:05.169478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-11-08 13:52:05.169504 | orchestrator | 2025-11-08 13:52:05.169510 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-11-08 13:52:05.169515 | orchestrator | Saturday 08 November 2025 13:49:07 +0000 (0:00:01.578) 0:03:43.097 ***** 2025-11-08 13:52:05.169524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': 
{'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-11-08 13:52:05.169530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-11-08 13:52:05.169535 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.169541 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.169546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-11-08 13:52:05.169552 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.169557 | orchestrator | 2025-11-08 13:52:05.169563 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-11-08 13:52:05.169568 | orchestrator | Saturday 08 November 2025 13:49:07 +0000 (0:00:00.383) 0:03:43.480 ***** 2025-11-08 13:52:05.169574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-11-08 13:52:05.169580 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.169585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-11-08 13:52:05.169591 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.169635 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-11-08 13:52:05.169647 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.169652 | orchestrator | 2025-11-08 13:52:05.169658 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-11-08 13:52:05.169663 | orchestrator | Saturday 08 November 2025 13:49:08 +0000 (0:00:00.846) 0:03:44.327 ***** 2025-11-08 13:52:05.169668 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.169674 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.169679 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.169684 | orchestrator | 2025-11-08 13:52:05.169690 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-11-08 13:52:05.169695 | orchestrator | Saturday 08 November 2025 13:49:08 +0000 (0:00:00.482) 0:03:44.809 ***** 2025-11-08 13:52:05.169701 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.169706 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.169712 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.169717 | orchestrator | 2025-11-08 13:52:05.169722 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-11-08 13:52:05.169728 | orchestrator | Saturday 08 November 2025 13:49:10 +0000 (0:00:01.346) 0:03:46.155 ***** 2025-11-08 13:52:05.169736 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.169742 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.169747 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.169752 | orchestrator | 2025-11-08 13:52:05.169758 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-11-08 13:52:05.169763 | orchestrator | Saturday 08 November 2025 13:49:10 +0000 (0:00:00.355) 0:03:46.511 ***** 2025-11-08 13:52:05.169769 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:52:05.169774 | orchestrator | 2025-11-08 13:52:05.169779 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-11-08 13:52:05.169785 | orchestrator | Saturday 08 November 2025 13:49:12 +0000 (0:00:01.562) 0:03:48.073 ***** 2025-11-08 13:52:05.169790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-08 13:52:05.169796 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.169802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.169858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.169866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-11-08 13:52:05.169872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.169878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-08 13:52:05.169884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-08 13:52:05.169929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-08 13:52:05.169983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.169994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.170000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.170005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-08 13:52:05.170011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.170043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.170080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-11-08 13:52:05.170090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-11-08 13:52:05.170096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.170102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-08 13:52:05.170108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.170117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-08 13:52:05.170152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-11-08 13:52:05.170159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-08 13:52:05.170167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-11-08 13:52:05.170173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.170179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-08 13:52:05.170188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.170194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-11-08 13:52:05.170225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-08 13:52:05.170235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.170241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-11-08 13:52:05.170246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 
'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-11-08 13:52:05.170256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-08 13:52:05.170302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.170313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.170318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.170324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-11-08 13:52:05.170333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.170339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-08 13:52:05.170345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-08 13:52:05.170376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.170386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 
'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-08 13:52:05.170392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.170401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-11-08 13:52:05.170407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-08 13:52:05.170412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.170442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-11-08 13:52:05.170453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-11-08 13:52:05.170458 | orchestrator | 2025-11-08 13:52:05.170464 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-11-08 13:52:05.170470 | orchestrator | Saturday 08 November 2025 13:49:16 +0000 (0:00:04.619) 0:03:52.693 ***** 2025-11-08 13:52:05.170476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-08 13:52:05.170504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.170510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 
'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.170542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.170553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-11-08 13:52:05.170558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.170569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-08 13:52:05.170575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-08 13:52:05.170580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-08 13:52:05.170644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.170654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.170660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.170670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-08 13:52:05.170676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.170681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.170714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-11-08 13:52:05.170724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-11-08 13:52:05.170730 | orchestrator 
| skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.170740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-08 13:52:05.170746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-08 13:52:05.170752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.170758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-08 13:52:05.170804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': 
False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-11-08 13:52:05.170815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.170825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-11-08 13:52:05.170830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-08 13:52:05.170836 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.170842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.170848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-11-08 13:52:05.170902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-08 13:52:05.170918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-08 13:52:05.170933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.170943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.170952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-11-08 13:52:05.171026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.171042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-11-08 13:52:05.171054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.171060 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.171066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-11-08 13:52:05.171071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.171077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-08 13:52:05.171083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-08 13:52:05.171109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.171124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-08 13:52:05.171130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 
'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.171136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-11-08 13:52:05.171142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-08 13:52:05.171148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.171169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-11-08 13:52:05.171186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': 
False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-11-08 13:52:05.171192 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.171197 | orchestrator | 2025-11-08 13:52:05.171203 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-11-08 13:52:05.171209 | orchestrator | Saturday 08 November 2025 13:49:18 +0000 (0:00:01.625) 0:03:54.318 ***** 2025-11-08 13:52:05.171215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-11-08 13:52:05.171224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-11-08 13:52:05.171234 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.171243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-11-08 13:52:05.171252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-11-08 13:52:05.171261 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.171271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-11-08 13:52:05.171278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-11-08 13:52:05.171284 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.171289 | orchestrator | 2025-11-08 13:52:05.171295 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-11-08 13:52:05.171300 | orchestrator | Saturday 08 November 2025 13:49:20 +0000 (0:00:02.065) 0:03:56.383 ***** 2025-11-08 13:52:05.171305 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:52:05.171311 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:52:05.171316 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:52:05.171321 | orchestrator | 2025-11-08 13:52:05.171327 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-11-08 13:52:05.171332 | orchestrator | Saturday 08 November 2025 13:49:21 +0000 (0:00:01.402) 0:03:57.786 ***** 2025-11-08 13:52:05.171337 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:52:05.171343 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:52:05.171348 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:52:05.171353 | orchestrator | 2025-11-08 13:52:05.171359 | orchestrator | TASK [include_role : placement] 
************************************************ 2025-11-08 13:52:05.171364 | orchestrator | Saturday 08 November 2025 13:49:24 +0000 (0:00:02.263) 0:04:00.050 ***** 2025-11-08 13:52:05.171370 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:52:05.171380 | orchestrator | 2025-11-08 13:52:05.171385 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-11-08 13:52:05.171390 | orchestrator | Saturday 08 November 2025 13:49:25 +0000 (0:00:01.243) 0:04:01.294 ***** 2025-11-08 13:52:05.171415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-11-08 13:52:05.171425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-11-08 13:52:05.171431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-11-08 13:52:05.171437 | orchestrator | 2025-11-08 13:52:05.171442 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 
2025-11-08 13:52:05.171448 | orchestrator | Saturday 08 November 2025 13:49:29 +0000 (0:00:03.773) 0:04:05.067 ***** 2025-11-08 13:52:05.171454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-11-08 13:52:05.171464 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.171528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-11-08 13:52:05.171538 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.171548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-11-08 13:52:05.171553 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.171559 | orchestrator | 2025-11-08 13:52:05.171564 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-11-08 13:52:05.171570 | orchestrator | Saturday 08 November 2025 13:49:29 +0000 (0:00:00.552) 0:04:05.619 ***** 2025-11-08 13:52:05.171575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-11-08 13:52:05.171582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-11-08 13:52:05.171588 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.171593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-11-08 13:52:05.171599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-11-08 13:52:05.171604 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.171610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-11-08 13:52:05.171615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-11-08 13:52:05.171625 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.171631 | orchestrator | 2025-11-08 13:52:05.171636 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-11-08 13:52:05.171643 | orchestrator | Saturday 08 November 2025 13:49:30 +0000 (0:00:00.867) 0:04:06.487 ***** 2025-11-08 13:52:05.171649 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:52:05.171655 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:52:05.171661 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:52:05.171667 | orchestrator | 2025-11-08 13:52:05.171673 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-11-08 13:52:05.171680 | orchestrator | Saturday 08 November 2025 13:49:32 +0000 (0:00:01.984) 0:04:08.472 ***** 2025-11-08 13:52:05.171686 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:52:05.171692 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:52:05.171698 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:52:05.171704 | orchestrator | 2025-11-08 13:52:05.171710 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-11-08 13:52:05.171716 | orchestrator | Saturday 08 November 2025 13:49:34 +0000 (0:00:01.713) 0:04:10.185 ***** 2025-11-08 13:52:05.171722 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:52:05.171729 | orchestrator | 2025-11-08 13:52:05.171735 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-11-08 13:52:05.171741 | orchestrator | Saturday 08 November 2025 13:49:35 +0000 (0:00:01.540) 0:04:11.726 ***** 2025-11-08 13:52:05.171771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-08 13:52:05.171779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.171786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.171798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-08 13:52:05.171805 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.171831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-08 13:52:05.171839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.171846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.171857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.171863 | orchestrator | 2025-11-08 13:52:05.171870 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-11-08 13:52:05.171876 | orchestrator | Saturday 08 November 2025 13:49:40 +0000 (0:00:04.278) 0:04:16.005 ***** 2025-11-08 13:52:05.171898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-11-08 13:52:05.171905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.171914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.171920 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.171926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-11-08 13:52:05.171936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.171941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.171947 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.171972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-11-08 13:52:05.171979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 
'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.171990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.171996 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.172001 | orchestrator | 2025-11-08 13:52:05.172006 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-11-08 13:52:05.172010 | orchestrator | Saturday 08 November 2025 13:49:41 +0000 (0:00:01.262) 0:04:17.267 ***** 2025-11-08 13:52:05.172016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-11-08 13:52:05.172021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-11-08 13:52:05.172026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-11-08 13:52:05.172031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-11-08 13:52:05.172036 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.172041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-11-08 13:52:05.172046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-11-08 13:52:05.172051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-11-08 13:52:05.172056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}})  2025-11-08 13:52:05.172074 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.172079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-11-08 13:52:05.172084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-11-08 13:52:05.172089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-11-08 13:52:05.172099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-11-08 13:52:05.172108 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.172113 | orchestrator | 2025-11-08 13:52:05.172118 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-11-08 13:52:05.172123 | orchestrator | Saturday 08 November 2025 13:49:42 +0000 (0:00:00.909) 0:04:18.177 ***** 2025-11-08 13:52:05.172127 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:52:05.172132 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:52:05.172137 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:52:05.172142 | orchestrator | 2025-11-08 13:52:05.172147 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-11-08 13:52:05.172152 | orchestrator | Saturday 08 November 2025 13:49:43 +0000 (0:00:01.407) 0:04:19.585 ***** 2025-11-08 13:52:05.172156 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:52:05.172161 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:52:05.172166 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:52:05.172171 | orchestrator | 2025-11-08 13:52:05.172175 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-11-08 13:52:05.172180 | orchestrator | Saturday 08 November 2025 13:49:45 +0000 (0:00:02.160) 0:04:21.745 ***** 2025-11-08 13:52:05.172185 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:52:05.172190 | orchestrator | 2025-11-08 13:52:05.172195 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-11-08 13:52:05.172199 | orchestrator | Saturday 08 November 2025 13:49:47 +0000 (0:00:01.618) 0:04:23.364 ***** 2025-11-08 13:52:05.172204 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-11-08 13:52:05.172210 | orchestrator | 2025-11-08 13:52:05.172214 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-11-08 13:52:05.172220 | orchestrator | Saturday 08 November 2025 13:49:48 +0000 (0:00:00.913) 0:04:24.278 ***** 2025-11-08 13:52:05.172228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-11-08 13:52:05.172237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-11-08 13:52:05.172245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-11-08 13:52:05.172253 | orchestrator | 2025-11-08 13:52:05.172261 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-11-08 13:52:05.172269 | orchestrator | Saturday 08 November 2025 13:49:53 +0000 (0:00:04.795) 0:04:29.073 ***** 2025-11-08 13:52:05.172299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-11-08 13:52:05.172311 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.172320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-11-08 13:52:05.172325 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.172330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-11-08 13:52:05.172335 | orchestrator | skipping: 
[testbed-node-1] 2025-11-08 13:52:05.172340 | orchestrator | 2025-11-08 13:52:05.172345 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-11-08 13:52:05.172350 | orchestrator | Saturday 08 November 2025 13:49:54 +0000 (0:00:01.105) 0:04:30.179 ***** 2025-11-08 13:52:05.172355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-11-08 13:52:05.172360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-11-08 13:52:05.172365 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.172370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-11-08 13:52:05.172375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-11-08 13:52:05.172380 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.172385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-11-08 13:52:05.172390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-11-08 13:52:05.172395 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.172400 | orchestrator | 2025-11-08 13:52:05.172404 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-11-08 13:52:05.172409 | orchestrator | Saturday 08 November 2025 13:49:55 +0000 (0:00:01.629) 0:04:31.808 ***** 2025-11-08 13:52:05.172414 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:52:05.172419 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:52:05.172428 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:52:05.172433 | orchestrator | 2025-11-08 13:52:05.172438 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-11-08 13:52:05.172442 | orchestrator | Saturday 08 November 2025 13:49:58 +0000 (0:00:02.562) 0:04:34.370 ***** 2025-11-08 13:52:05.172447 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:52:05.172452 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:52:05.172457 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:52:05.172461 | orchestrator | 2025-11-08 13:52:05.172466 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-11-08 13:52:05.172471 | orchestrator | Saturday 08 November 2025 13:50:01 +0000 (0:00:03.009) 0:04:37.379 ***** 2025-11-08 13:52:05.172510 | orchestrator | included: 
/ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-11-08 13:52:05.172519 | orchestrator | 2025-11-08 13:52:05.172527 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-11-08 13:52:05.172535 | orchestrator | Saturday 08 November 2025 13:50:02 +0000 (0:00:01.434) 0:04:38.814 ***** 2025-11-08 13:52:05.172542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-11-08 13:52:05.172550 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.172563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-11-08 13:52:05.172572 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.172579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-11-08 13:52:05.172587 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.172596 | orchestrator | 2025-11-08 13:52:05.172601 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-11-08 13:52:05.172606 | orchestrator | Saturday 08 November 2025 13:50:04 +0000 (0:00:01.213) 0:04:40.027 ***** 2025-11-08 13:52:05.172611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-11-08 13:52:05.172616 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.172621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': 
False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-11-08 13:52:05.172631 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.172636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-11-08 13:52:05.172641 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.172646 | orchestrator | 2025-11-08 13:52:05.172651 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-11-08 13:52:05.172656 | orchestrator | Saturday 08 November 2025 13:50:05 +0000 (0:00:01.286) 0:04:41.313 ***** 2025-11-08 13:52:05.172660 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.172665 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.172670 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.172675 | orchestrator | 2025-11-08 13:52:05.172698 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-11-08 13:52:05.172704 | orchestrator | Saturday 08 November 2025 13:50:07 +0000 (0:00:01.833) 0:04:43.147 ***** 2025-11-08 13:52:05.172708 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:52:05.172714 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:52:05.172719 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:52:05.172724 | orchestrator | 2025-11-08 13:52:05.172728 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-11-08 13:52:05.172733 | orchestrator | Saturday 08 November 2025 13:50:09 +0000 (0:00:02.329) 0:04:45.476 ***** 2025-11-08 13:52:05.172738 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:52:05.172743 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:52:05.172748 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:52:05.172753 | orchestrator | 2025-11-08 13:52:05.172757 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-11-08 13:52:05.172762 | orchestrator | Saturday 08 November 2025 13:50:12 +0000 (0:00:03.037) 0:04:48.513 ***** 2025-11-08 13:52:05.172770 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-11-08 13:52:05.172775 | orchestrator | 2025-11-08 13:52:05.172780 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-11-08 13:52:05.172785 | orchestrator | Saturday 08 November 2025 13:50:13 +0000 (0:00:00.863) 0:04:49.377 ***** 2025-11-08 13:52:05.172790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 
'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-11-08 13:52:05.172795 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.172800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-11-08 13:52:05.172809 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.172814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-11-08 13:52:05.172819 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.172824 | orchestrator | 2025-11-08 13:52:05.172828 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-11-08 13:52:05.172833 | orchestrator | Saturday 08 November 2025 13:50:14 +0000 (0:00:01.354) 0:04:50.732 ***** 2025-11-08 13:52:05.172838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-11-08 13:52:05.172843 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.172848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-11-08 13:52:05.172853 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.172873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 
'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-11-08 13:52:05.172878 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.172883 | orchestrator | 2025-11-08 13:52:05.172888 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-11-08 13:52:05.172893 | orchestrator | Saturday 08 November 2025 13:50:16 +0000 (0:00:01.362) 0:04:52.094 ***** 2025-11-08 13:52:05.172898 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.172903 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.172907 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.172912 | orchestrator | 2025-11-08 13:52:05.172920 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-11-08 13:52:05.172925 | orchestrator | Saturday 08 November 2025 13:50:17 +0000 (0:00:01.513) 0:04:53.608 ***** 2025-11-08 13:52:05.172930 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:52:05.172934 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:52:05.172939 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:52:05.172944 | orchestrator | 2025-11-08 13:52:05.172949 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-11-08 13:52:05.172958 | orchestrator | Saturday 08 November 2025 13:50:20 +0000 (0:00:02.421) 0:04:56.029 ***** 2025-11-08 13:52:05.172963 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:52:05.172968 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:52:05.172972 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:52:05.172977 | orchestrator | 2025-11-08 13:52:05.172982 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-11-08 13:52:05.172987 | orchestrator | Saturday 08 November 2025 13:50:23 +0000 (0:00:03.310) 0:04:59.340 ***** 2025-11-08 13:52:05.172992 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:52:05.172996 | orchestrator | 2025-11-08 13:52:05.173001 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-11-08 13:52:05.173006 | orchestrator | Saturday 08 November 2025 13:50:25 +0000 (0:00:01.582) 0:05:00.922 ***** 2025-11-08 13:52:05.173011 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-08 13:52:05.173016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 
'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-08 13:52:05.173036 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-08 13:52:05.173042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-08 13:52:05.173054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-08 13:52:05.173059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-08 13:52:05.173064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-08 13:52:05.173069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.173074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-08 13:52:05.173092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.173115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-08 13:52:05.173125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-08 13:52:05.173130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-08 13:52:05.173135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-08 13:52:05.173140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.173145 | orchestrator | 2025-11-08 13:52:05.173150 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-11-08 13:52:05.173154 | orchestrator | Saturday 08 November 2025 13:50:28 +0000 (0:00:03.556) 0:05:04.479 ***** 2025-11-08 13:52:05.173174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-11-08 13:52:05.173186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': 
{'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-08 13:52:05.173195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-08 13:52:05.173200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-08 13:52:05.173205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.173210 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.173215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-11-08 13:52:05.173234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-08 13:52:05.173243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-08 13:52:05.173251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-08 13:52:05.173256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.173261 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.173266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-11-08 13:52:05.173271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-08 13:52:05.173277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-08 13:52:05.173300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-08 13:52:05.173308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-08 13:52:05.173313 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.173318 | orchestrator | 2025-11-08 13:52:05.173323 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-11-08 13:52:05.173328 | orchestrator | Saturday 08 November 2025 13:50:29 +0000 (0:00:00.725) 0:05:05.205 ***** 2025-11-08 13:52:05.173333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-11-08 13:52:05.173338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-11-08 13:52:05.173343 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.173348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-11-08 13:52:05.173353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-11-08 13:52:05.173358 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.173362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-11-08 13:52:05.173367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-11-08 13:52:05.173372 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.173377 | orchestrator | 2025-11-08 13:52:05.173382 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-11-08 13:52:05.173386 | orchestrator | Saturday 08 November 2025 13:50:30 +0000 (0:00:01.595) 0:05:06.800 ***** 2025-11-08 13:52:05.173391 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:52:05.173396 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:52:05.173401 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:52:05.173405 | orchestrator | 2025-11-08 13:52:05.173410 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-11-08 13:52:05.173415 | orchestrator | Saturday 08 November 2025 13:50:32 +0000 (0:00:01.538) 0:05:08.339 ***** 2025-11-08 13:52:05.173420 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:52:05.173424 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:52:05.173433 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:52:05.173438 | orchestrator | 2025-11-08 13:52:05.173443 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-11-08 13:52:05.173447 | orchestrator | Saturday 08 November 2025 13:50:34 +0000 (0:00:02.195) 0:05:10.534 ***** 2025-11-08 13:52:05.173452 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:52:05.173457 | orchestrator | 2025-11-08 13:52:05.173462 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-11-08 13:52:05.173466 | orchestrator | Saturday 08 November 2025 13:50:36 +0000 (0:00:01.476) 0:05:12.011 ***** 2025-11-08 13:52:05.173500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-08 13:52:05.173510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 
'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-08 13:52:05.173515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-08 13:52:05.173521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-08 13:52:05.173544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-08 13:52:05.173554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-08 13:52:05.173559 | orchestrator | 2025-11-08 13:52:05.173564 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-11-08 13:52:05.173569 | orchestrator | Saturday 08 November 2025 13:50:41 +0000 (0:00:05.742) 0:05:17.753 ***** 2025-11-08 13:52:05.173574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-11-08 13:52:05.173580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-11-08 13:52:05.173588 | orchestrator | skipping: [testbed-node-0] 2025-11-08 
13:52:05.173593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-11-08 13:52:05.173615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-11-08 13:52:05.173621 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.173626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-11-08 13:52:05.173631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-11-08 13:52:05.173640 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.173645 | orchestrator | 2025-11-08 13:52:05.173650 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-11-08 13:52:05.173654 | orchestrator | Saturday 08 November 2025 13:50:42 +0000 (0:00:00.751) 0:05:18.505 ***** 2025-11-08 13:52:05.173659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-11-08 13:52:05.173664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-11-08 13:52:05.173669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-11-08 13:52:05.173674 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.173679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-11-08 13:52:05.173705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-11-08 13:52:05.173711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-11-08 13:52:05.173716 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.173721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-11-08 13:52:05.173729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-11-08 13:52:05.173734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-11-08 13:52:05.173746 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.173752 | orchestrator | 2025-11-08 13:52:05.173756 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] 
********* 2025-11-08 13:52:05.173761 | orchestrator | Saturday 08 November 2025 13:50:43 +0000 (0:00:00.917) 0:05:19.422 ***** 2025-11-08 13:52:05.173766 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.173771 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.173776 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.173781 | orchestrator | 2025-11-08 13:52:05.173785 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-11-08 13:52:05.173790 | orchestrator | Saturday 08 November 2025 13:50:44 +0000 (0:00:00.834) 0:05:20.256 ***** 2025-11-08 13:52:05.173799 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.173803 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.173808 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.173813 | orchestrator | 2025-11-08 13:52:05.173818 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-11-08 13:52:05.173822 | orchestrator | Saturday 08 November 2025 13:50:45 +0000 (0:00:01.360) 0:05:21.617 ***** 2025-11-08 13:52:05.173827 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:52:05.173832 | orchestrator | 2025-11-08 13:52:05.173837 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-11-08 13:52:05.173842 | orchestrator | Saturday 08 November 2025 13:50:47 +0000 (0:00:01.434) 0:05:23.051 ***** 2025-11-08 13:52:05.173847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-11-08 13:52:05.173852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-08 13:52:05.173857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:52:05.173876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:52:05.173887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-08 13:52:05.173893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-11-08 13:52:05.173901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-08 13:52:05.173907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:52:05.173912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:52:05.173917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-11-08 13:52:05.173936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-08 13:52:05.173945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-08 13:52:05.173951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:52:05.173959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:52:05.173964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-08 13:52:05.173969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': 
{'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-11-08 13:52:05.173977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-11-08 13:52:05.173983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:52:05.173990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:52:05.173999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-08 13:52:05.174005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 
'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-11-08 13:52:05.174010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-11-08 13:52:05.174039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:52:05.174044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:52:05.174056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-08 13:52:05.174065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-11-08 13:52:05.174070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-11-08 13:52:05.174075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:52:05.174080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:52:05.174088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-08 13:52:05.174093 | orchestrator | 2025-11-08 13:52:05.174098 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-11-08 13:52:05.174103 | orchestrator | Saturday 08 November 2025 13:50:51 
+0000 (0:00:04.362) 0:05:27.414 ***** 2025-11-08 13:52:05.174111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-11-08 13:52:05.174120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-08 13:52:05.174125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:52:05.174130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:52:05.174135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-08 13:52:05.174143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 
'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-11-08 13:52:05.174152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-11-08 13:52:05.174160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:52:05.174166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:52:05.174171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-08 13:52:05.174175 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.174181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-11-08 13:52:05.174186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-08 13:52:05.174193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:52:05.174202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:52:05.174210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-08 13:52:05.174216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-11-08 
13:52:05.174221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-11-08 13:52:05.174226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-11-08 13:52:05.174233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-08 13:52:05.174247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:52:05.174252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:52:05.174257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:52:05.174262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:52:05.174267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-08 13:52:05.174272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-08 13:52:05.174277 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.174284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-11-08 13:52:05.174296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-11-08 13:52:05.174301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:52:05.174306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 13:52:05.174311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-08 13:52:05.174316 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.174321 | orchestrator | 2025-11-08 13:52:05.174326 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-11-08 13:52:05.174331 | orchestrator | Saturday 08 November 2025 13:50:52 +0000 (0:00:01.252) 0:05:28.666 ***** 2025-11-08 13:52:05.174336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-11-08 13:52:05.174341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-11-08 13:52:05.174346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-11-08 13:52:05.174356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-11-08 13:52:05.174361 | orchestrator | skipping: 
[testbed-node-0] 2025-11-08 13:52:05.174366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-11-08 13:52:05.174373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-11-08 13:52:05.174379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-11-08 13:52:05.174387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-11-08 13:52:05.174392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-11-08 13:52:05.174397 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.174402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-11-08 13:52:05.174407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-11-08 13:52:05.174412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-11-08 13:52:05.174417 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.174422 | orchestrator | 2025-11-08 13:52:05.174426 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-11-08 13:52:05.174431 | orchestrator | Saturday 08 November 2025 13:50:53 +0000 (0:00:00.999) 0:05:29.666 ***** 2025-11-08 13:52:05.174436 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.174441 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.174446 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.174450 | orchestrator | 2025-11-08 13:52:05.174455 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-11-08 13:52:05.174460 | orchestrator | Saturday 08 November 2025 13:50:54 +0000 (0:00:00.457) 0:05:30.123 ***** 2025-11-08 13:52:05.174465 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.174470 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.174474 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.174479 | orchestrator | 
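The haproxy-config tasks above iterate over per-project service definitions; each loop item pairs a service key with a value that may carry a 'haproxy' sub-dict describing the frontends to render. Below is a minimal, self-contained sketch of that structure in Python, with the values copied from the prometheus-server item in this log; the small loop is only an illustrative reading of the logged data, not the kolla-ansible role's actual logic, and the variable names are made up for this sketch.

    # Minimal sketch, not kolla-ansible's implementation: the shape of one
    # service definition as it appears in the loop items logged above
    # (prometheus-server); "project_services" is an illustrative name.
    project_services = {
        "prometheus-server": {
            "container_name": "prometheus_server",
            "group": "prometheus",
            "enabled": True,
            "image": "registry.osism.tech/kolla/prometheus-v2-server:2024.2",
            "haproxy": {
                "prometheus_server": {
                    "enabled": True, "mode": "http", "external": False,
                    "port": "9091", "active_passive": True,
                },
                "prometheus_server_external": {
                    "enabled": False, "mode": "http", "external": True,
                    "external_fqdn": "api.testbed.osism.xyz",
                    "port": "9091", "listen_port": "9091",
                    "active_passive": True,
                },
            },
        },
    }

    # Illustrative reading of the data: only enabled haproxy entries describe
    # a frontend, and 'external' distinguishes internal VIP exposure from the
    # external FQDN. With the values above, only the internal prometheus_server
    # entry is enabled.
    for service in project_services.values():
        if not service.get("enabled"):
            continue
        for name, fe in service.get("haproxy", {}).items():
            if fe.get("enabled") in (True, "yes"):
                scope = ("external (%s)" % fe["external_fqdn"]
                         if fe.get("external") else "internal")
                print(f"{name}: {fe['mode']} frontend on port {fe['port']}, {scope}")

How each entry is actually turned into HAProxy configuration (and why the firewall and ProxySQL tasks above are skipped) is decided by the conditions and templates of the kolla-ansible haproxy-config and proxysql-config roles, which are not shown in this log.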
2025-11-08 13:52:05.174484 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-11-08 13:52:05.174500 | orchestrator | Saturday 08 November 2025 13:50:55 +0000 (0:00:01.438) 0:05:31.562 ***** 2025-11-08 13:52:05.174515 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:52:05.174520 | orchestrator | 2025-11-08 13:52:05.174525 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-11-08 13:52:05.174530 | orchestrator | Saturday 08 November 2025 13:50:57 +0000 (0:00:01.877) 0:05:33.440 ***** 2025-11-08 13:52:05.174535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-11-08 13:52:05.174543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-11-08 13:52:05.174552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': 
{'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-11-08 13:52:05.174557 | orchestrator | 2025-11-08 13:52:05.174562 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-11-08 13:52:05.174567 | orchestrator | Saturday 08 November 2025 13:51:00 +0000 (0:00:02.439) 0:05:35.879 ***** 2025-11-08 13:52:05.174572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-11-08 13:52:05.174582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-11-08 13:52:05.174588 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.174592 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.174603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 
'rabbitmq'}}}})  2025-11-08 13:52:05.174609 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.174613 | orchestrator | 2025-11-08 13:52:05.174618 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-11-08 13:52:05.174623 | orchestrator | Saturday 08 November 2025 13:51:00 +0000 (0:00:00.477) 0:05:36.357 ***** 2025-11-08 13:52:05.174628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-11-08 13:52:05.174633 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.174638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-11-08 13:52:05.174643 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.174648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-11-08 13:52:05.174652 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.174657 | orchestrator | 2025-11-08 13:52:05.174662 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-11-08 13:52:05.174667 | orchestrator | Saturday 08 November 2025 13:51:01 +0000 (0:00:01.077) 0:05:37.435 ***** 2025-11-08 13:52:05.174675 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.174680 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.174685 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.174690 | orchestrator | 2025-11-08 13:52:05.174694 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-11-08 13:52:05.174699 | orchestrator | Saturday 08 November 2025 13:51:02 +0000 (0:00:00.472) 0:05:37.907 ***** 2025-11-08 13:52:05.174704 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.174709 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.174713 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.174718 | orchestrator | 2025-11-08 13:52:05.174723 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-11-08 13:52:05.174728 | orchestrator | Saturday 08 November 2025 13:51:03 +0000 (0:00:01.331) 0:05:39.238 ***** 2025-11-08 13:52:05.174733 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:52:05.174737 | orchestrator | 2025-11-08 13:52:05.174742 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-11-08 13:52:05.174747 | orchestrator | Saturday 08 November 2025 13:51:05 +0000 (0:00:01.732) 0:05:40.971 ***** 2025-11-08 13:52:05.174752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-11-08 13:52:05.174760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-11-08 13:52:05.174768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-11-08 13:52:05.174777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-11-08 13:52:05.174783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-11-08 13:52:05.174788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-11-08 13:52:05.174793 | orchestrator | 2025-11-08 13:52:05.174801 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-11-08 13:52:05.174806 | orchestrator | Saturday 08 November 2025 13:51:11 +0000 (0:00:06.261) 0:05:47.233 ***** 2025-11-08 13:52:05.174818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-11-08 13:52:05.174827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 
'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-11-08 13:52:05.174832 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.174837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-11-08 13:52:05.174842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-11-08 13:52:05.174847 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.174855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-11-08 13:52:05.174863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-11-08 13:52:05.174873 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.174878 | orchestrator | 2025-11-08 13:52:05.174883 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-11-08 13:52:05.174888 | orchestrator | Saturday 08 November 2025 13:51:12 +0000 (0:00:00.622) 0:05:47.856 ***** 2025-11-08 13:52:05.174893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-11-08 13:52:05.174898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-11-08 13:52:05.174903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-11-08 13:52:05.174908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-11-08 13:52:05.174913 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.174918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-11-08 13:52:05.174922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-11-08 13:52:05.174927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-11-08 13:52:05.174932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-11-08 13:52:05.174937 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.174942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-11-08 13:52:05.174949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-11-08 13:52:05.174954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-11-08 13:52:05.174959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-11-08 13:52:05.174967 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.174972 | orchestrator | 2025-11-08 13:52:05.174977 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-11-08 13:52:05.174982 | orchestrator | Saturday 08 November 2025 13:51:13 +0000 (0:00:01.653) 0:05:49.510 ***** 2025-11-08 13:52:05.174987 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:52:05.174994 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:52:05.174999 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:52:05.175004 | orchestrator | 2025-11-08 13:52:05.175009 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-11-08 13:52:05.175014 | orchestrator | Saturday 08 November 2025 13:51:15 +0000 (0:00:01.374) 0:05:50.884 ***** 2025-11-08 13:52:05.175018 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:52:05.175023 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:52:05.175028 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:52:05.175033 | orchestrator | 2025-11-08 13:52:05.175038 | orchestrator | TASK [include_role : swift] **************************************************** 2025-11-08 13:52:05.175042 | orchestrator | Saturday 08 November 2025 13:51:17 +0000 (0:00:02.265) 0:05:53.150 ***** 2025-11-08 13:52:05.175047 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.175052 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.175057 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.175062 | orchestrator | 2025-11-08 13:52:05.175066 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-11-08 13:52:05.175071 | orchestrator | Saturday 08 November 2025 13:51:17 +0000 (0:00:00.352) 0:05:53.502 ***** 2025-11-08 13:52:05.175076 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.175081 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.175085 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.175090 | orchestrator | 2025-11-08 13:52:05.175095 | orchestrator | TASK [include_role : trove] **************************************************** 2025-11-08 13:52:05.175100 | orchestrator | Saturday 08 November 2025 13:51:17 +0000 (0:00:00.302) 0:05:53.805 ***** 2025-11-08 13:52:05.175105 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.175109 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.175114 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.175119 | orchestrator | 2025-11-08 13:52:05.175124 | orchestrator | TASK [include_role : venus] **************************************************** 2025-11-08 13:52:05.175129 | orchestrator | Saturday 08 November 2025 13:51:18 +0000 (0:00:00.668) 0:05:54.474 ***** 2025-11-08 13:52:05.175133 | 
orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.175138 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.175143 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.175148 | orchestrator | 2025-11-08 13:52:05.175152 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-11-08 13:52:05.175157 | orchestrator | Saturday 08 November 2025 13:51:18 +0000 (0:00:00.337) 0:05:54.811 ***** 2025-11-08 13:52:05.175162 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.175167 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.175171 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.175176 | orchestrator | 2025-11-08 13:52:05.175181 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-11-08 13:52:05.175186 | orchestrator | Saturday 08 November 2025 13:51:19 +0000 (0:00:00.318) 0:05:55.129 ***** 2025-11-08 13:52:05.175191 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.175195 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.175200 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.175205 | orchestrator | 2025-11-08 13:52:05.175210 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-11-08 13:52:05.175214 | orchestrator | Saturday 08 November 2025 13:51:20 +0000 (0:00:00.828) 0:05:55.958 ***** 2025-11-08 13:52:05.175223 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:52:05.175228 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:52:05.175233 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:52:05.175237 | orchestrator | 2025-11-08 13:52:05.175242 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-11-08 13:52:05.175247 | orchestrator | Saturday 08 November 2025 13:51:20 +0000 (0:00:00.712) 0:05:56.670 ***** 2025-11-08 13:52:05.175252 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:52:05.175256 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:52:05.175261 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:52:05.175266 | orchestrator | 2025-11-08 13:52:05.175271 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-11-08 13:52:05.175276 | orchestrator | Saturday 08 November 2025 13:51:21 +0000 (0:00:00.376) 0:05:57.046 ***** 2025-11-08 13:52:05.175280 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:52:05.175285 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:52:05.175290 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:52:05.175295 | orchestrator | 2025-11-08 13:52:05.175299 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-11-08 13:52:05.175304 | orchestrator | Saturday 08 November 2025 13:51:22 +0000 (0:00:00.832) 0:05:57.879 ***** 2025-11-08 13:52:05.175309 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:52:05.175314 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:52:05.175319 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:52:05.175323 | orchestrator | 2025-11-08 13:52:05.175328 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-11-08 13:52:05.175333 | orchestrator | Saturday 08 November 2025 13:51:23 +0000 (0:00:01.176) 0:05:59.056 ***** 2025-11-08 13:52:05.175338 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:52:05.175342 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:52:05.175349 | 
orchestrator | ok: [testbed-node-2] 2025-11-08 13:52:05.175354 | orchestrator | 2025-11-08 13:52:05.175359 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-11-08 13:52:05.175364 | orchestrator | Saturday 08 November 2025 13:51:24 +0000 (0:00:00.841) 0:05:59.897 ***** 2025-11-08 13:52:05.175369 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:52:05.175374 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:52:05.175379 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:52:05.175383 | orchestrator | 2025-11-08 13:52:05.175388 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-11-08 13:52:05.175393 | orchestrator | Saturday 08 November 2025 13:51:28 +0000 (0:00:04.768) 0:06:04.665 ***** 2025-11-08 13:52:05.175398 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:52:05.175403 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:52:05.175407 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:52:05.175412 | orchestrator | 2025-11-08 13:52:05.175417 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-11-08 13:52:05.175422 | orchestrator | Saturday 08 November 2025 13:51:32 +0000 (0:00:03.687) 0:06:08.353 ***** 2025-11-08 13:52:05.175427 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:52:05.175431 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:52:05.175436 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:52:05.175441 | orchestrator | 2025-11-08 13:52:05.175448 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-11-08 13:52:05.175453 | orchestrator | Saturday 08 November 2025 13:51:41 +0000 (0:00:09.283) 0:06:17.636 ***** 2025-11-08 13:52:05.175458 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:52:05.175463 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:52:05.175468 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:52:05.175472 | orchestrator | 2025-11-08 13:52:05.175477 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-11-08 13:52:05.175482 | orchestrator | Saturday 08 November 2025 13:51:46 +0000 (0:00:05.006) 0:06:22.643 ***** 2025-11-08 13:52:05.175521 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:52:05.175527 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:52:05.175536 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:52:05.175541 | orchestrator | 2025-11-08 13:52:05.175546 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-11-08 13:52:05.175551 | orchestrator | Saturday 08 November 2025 13:51:56 +0000 (0:00:09.505) 0:06:32.148 ***** 2025-11-08 13:52:05.175555 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.175560 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.175565 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.175570 | orchestrator | 2025-11-08 13:52:05.175575 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-11-08 13:52:05.175579 | orchestrator | Saturday 08 November 2025 13:51:56 +0000 (0:00:00.359) 0:06:32.508 ***** 2025-11-08 13:52:05.175584 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.175589 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.175594 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.175599 | orchestrator | 2025-11-08 13:52:05.175603 
| orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-11-08 13:52:05.175608 | orchestrator | Saturday 08 November 2025 13:51:57 +0000 (0:00:00.373) 0:06:32.882 ***** 2025-11-08 13:52:05.175613 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.175618 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.175623 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.175627 | orchestrator | 2025-11-08 13:52:05.175632 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-11-08 13:52:05.175637 | orchestrator | Saturday 08 November 2025 13:51:57 +0000 (0:00:00.691) 0:06:33.574 ***** 2025-11-08 13:52:05.175642 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.175646 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.175651 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.175656 | orchestrator | 2025-11-08 13:52:05.175661 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-11-08 13:52:05.175666 | orchestrator | Saturday 08 November 2025 13:51:58 +0000 (0:00:00.376) 0:06:33.950 ***** 2025-11-08 13:52:05.175670 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.175676 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.175680 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.175685 | orchestrator | 2025-11-08 13:52:05.175690 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-11-08 13:52:05.175695 | orchestrator | Saturday 08 November 2025 13:51:58 +0000 (0:00:00.364) 0:06:34.315 ***** 2025-11-08 13:52:05.175700 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:52:05.175704 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:52:05.175709 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:52:05.175714 | orchestrator | 2025-11-08 13:52:05.175719 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-11-08 13:52:05.175724 | orchestrator | Saturday 08 November 2025 13:51:58 +0000 (0:00:00.416) 0:06:34.732 ***** 2025-11-08 13:52:05.175728 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:52:05.175733 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:52:05.175738 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:52:05.175743 | orchestrator | 2025-11-08 13:52:05.175747 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-11-08 13:52:05.175752 | orchestrator | Saturday 08 November 2025 13:52:00 +0000 (0:00:01.383) 0:06:36.116 ***** 2025-11-08 13:52:05.175757 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:52:05.175762 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:52:05.175767 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:52:05.175771 | orchestrator | 2025-11-08 13:52:05.175776 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 13:52:05.175781 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-11-08 13:52:05.175787 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-11-08 13:52:05.175795 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-11-08 13:52:05.175800 | orchestrator | 2025-11-08 13:52:05.175805 | orchestrator | 
2025-11-08 13:52:05.175812 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 13:52:05.175817 | orchestrator | Saturday 08 November 2025 13:52:01 +0000 (0:00:00.932) 0:06:37.048 ***** 2025-11-08 13:52:05.175822 | orchestrator | =============================================================================== 2025-11-08 13:52:05.175827 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.51s 2025-11-08 13:52:05.175832 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 9.28s 2025-11-08 13:52:05.175836 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.26s 2025-11-08 13:52:05.175841 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.74s 2025-11-08 13:52:05.175846 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.51s 2025-11-08 13:52:05.175851 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 5.01s 2025-11-08 13:52:05.175855 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.80s 2025-11-08 13:52:05.175863 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.77s 2025-11-08 13:52:05.175868 | orchestrator | loadbalancer : Copying over keepalived.conf ----------------------------- 4.70s 2025-11-08 13:52:05.175873 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.62s 2025-11-08 13:52:05.175877 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.53s 2025-11-08 13:52:05.175882 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.50s 2025-11-08 13:52:05.175887 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.36s 2025-11-08 13:52:05.175891 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.29s 2025-11-08 13:52:05.175896 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 4.28s 2025-11-08 13:52:05.175901 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.28s 2025-11-08 13:52:05.175906 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.19s 2025-11-08 13:52:05.175910 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.03s 2025-11-08 13:52:05.175915 | orchestrator | haproxy-config : Configuring firewall for ceph-rgw ---------------------- 4.02s 2025-11-08 13:52:05.175920 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.77s 2025-11-08 13:52:05.175925 | orchestrator | 2025-11-08 13:52:05 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:52:05.175930 | orchestrator | 2025-11-08 13:52:05 | INFO  | Task 03836f68-8f22-4f41-9df5-0916ef48261b is in state STARTED 2025-11-08 13:52:05.175935 | orchestrator | 2025-11-08 13:52:05 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:52:08.203732 | orchestrator | 2025-11-08 13:52:08 | INFO  | Task df1975d7-ec3c-4993-a36a-5c97e891420c is in state STARTED 2025-11-08 13:52:08.209269 | orchestrator | 2025-11-08 13:52:08 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:52:08.211617 | orchestrator | 
2025-11-08 13:52:08 | INFO  | Task 03836f68-8f22-4f41-9df5-0916ef48261b is in state STARTED 2025-11-08 13:52:08.211646 | orchestrator | 2025-11-08 13:52:08 | INFO  | Wait 1 second(s) until the next check [... tasks df1975d7-ec3c-4993-a36a-5c97e891420c, 21d1394b-f46a-42f4-9ba4-883cd2343e43 and 03836f68-8f22-4f41-9df5-0916ef48261b remain in state STARTED; identical status/wait polling repeats every ~3 seconds from 13:52:11 to 13:53:51 ...] 2025-11-08 13:53:51.993793 | orchestrator | 2025-11-08 13:53:51 | INFO  | Task 03836f68-8f22-4f41-9df5-0916ef48261b
is in state STARTED 2025-11-08 13:53:51.994047 | orchestrator | 2025-11-08 13:53:51 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:53:55.039258 | orchestrator | 2025-11-08 13:53:55 | INFO  | Task df1975d7-ec3c-4993-a36a-5c97e891420c is in state STARTED 2025-11-08 13:53:55.040731 | orchestrator | 2025-11-08 13:53:55 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:53:55.042656 | orchestrator | 2025-11-08 13:53:55 | INFO  | Task 03836f68-8f22-4f41-9df5-0916ef48261b is in state STARTED 2025-11-08 13:53:55.042711 | orchestrator | 2025-11-08 13:53:55 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:53:58.091782 | orchestrator | 2025-11-08 13:53:58 | INFO  | Task df1975d7-ec3c-4993-a36a-5c97e891420c is in state STARTED 2025-11-08 13:53:58.094947 | orchestrator | 2025-11-08 13:53:58 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:53:58.099033 | orchestrator | 2025-11-08 13:53:58 | INFO  | Task 03836f68-8f22-4f41-9df5-0916ef48261b is in state STARTED 2025-11-08 13:53:58.099094 | orchestrator | 2025-11-08 13:53:58 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:54:01.159146 | orchestrator | 2025-11-08 13:54:01 | INFO  | Task df1975d7-ec3c-4993-a36a-5c97e891420c is in state STARTED 2025-11-08 13:54:01.162777 | orchestrator | 2025-11-08 13:54:01 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state STARTED 2025-11-08 13:54:01.163963 | orchestrator | 2025-11-08 13:54:01 | INFO  | Task 03836f68-8f22-4f41-9df5-0916ef48261b is in state STARTED 2025-11-08 13:54:01.164180 | orchestrator | 2025-11-08 13:54:01 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:54:04.215026 | orchestrator | 2025-11-08 13:54:04 | INFO  | Task df1975d7-ec3c-4993-a36a-5c97e891420c is in state STARTED 2025-11-08 13:54:04.222134 | orchestrator | 2025-11-08 13:54:04 | INFO  | Task 21d1394b-f46a-42f4-9ba4-883cd2343e43 is in state SUCCESS 2025-11-08 13:54:04.222309 | orchestrator | 2025-11-08 13:54:04.224255 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2025-11-08 13:54:04.224302 | orchestrator | 2.16.14 2025-11-08 13:54:04.224309 | orchestrator | 2025-11-08 13:54:04.224314 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-11-08 13:54:04.224321 | orchestrator | 2025-11-08 13:54:04.224326 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-11-08 13:54:04.224331 | orchestrator | Saturday 08 November 2025 13:43:04 +0000 (0:00:00.794) 0:00:00.794 ***** 2025-11-08 13:54:04.224337 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:54:04.224382 | orchestrator | 2025-11-08 13:54:04.224387 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-11-08 13:54:04.224392 | orchestrator | Saturday 08 November 2025 13:43:05 +0000 (0:00:01.146) 0:00:01.941 ***** 2025-11-08 13:54:04.224397 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.224402 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.224407 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.224411 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.224416 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.224420 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.224425 | orchestrator | 
2025-11-08 13:54:04.224429 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-11-08 13:54:04.224435 | orchestrator | Saturday 08 November 2025 13:43:07 +0000 (0:00:01.765) 0:00:03.706 ***** 2025-11-08 13:54:04.224442 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.224448 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.224454 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.224460 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.224466 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.224476 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.224485 | orchestrator | 2025-11-08 13:54:04.224491 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-11-08 13:54:04.224499 | orchestrator | Saturday 08 November 2025 13:43:08 +0000 (0:00:00.873) 0:00:04.579 ***** 2025-11-08 13:54:04.224505 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.224511 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.224519 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.224525 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.224533 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.224540 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.224777 | orchestrator | 2025-11-08 13:54:04.224794 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-11-08 13:54:04.224800 | orchestrator | Saturday 08 November 2025 13:43:09 +0000 (0:00:00.963) 0:00:05.543 ***** 2025-11-08 13:54:04.224805 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.224809 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.224814 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.224818 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.224823 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.224827 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.224832 | orchestrator | 2025-11-08 13:54:04.224837 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-11-08 13:54:04.224865 | orchestrator | Saturday 08 November 2025 13:43:09 +0000 (0:00:00.648) 0:00:06.192 ***** 2025-11-08 13:54:04.224869 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.224874 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.224878 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.224883 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.224887 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.224892 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.224896 | orchestrator | 2025-11-08 13:54:04.224901 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-11-08 13:54:04.224905 | orchestrator | Saturday 08 November 2025 13:43:10 +0000 (0:00:00.807) 0:00:06.999 ***** 2025-11-08 13:54:04.224910 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.224914 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.224919 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.224923 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.224927 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.224931 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.224937 | orchestrator | 2025-11-08 13:54:04.224944 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-11-08 13:54:04.224953 | orchestrator | 
Saturday 08 November 2025 13:43:11 +0000 (0:00:01.152) 0:00:08.152 ***** 2025-11-08 13:54:04.224964 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.224972 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.224978 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.224985 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.224992 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.224999 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.225007 | orchestrator | 2025-11-08 13:54:04.225015 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-11-08 13:54:04.225019 | orchestrator | Saturday 08 November 2025 13:43:12 +0000 (0:00:00.928) 0:00:09.080 ***** 2025-11-08 13:54:04.225024 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.225028 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.225032 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.225037 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.225041 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.225045 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.225049 | orchestrator | 2025-11-08 13:54:04.225054 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-11-08 13:54:04.225058 | orchestrator | Saturday 08 November 2025 13:43:13 +0000 (0:00:00.842) 0:00:09.923 ***** 2025-11-08 13:54:04.225063 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-11-08 13:54:04.225067 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-11-08 13:54:04.225072 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-11-08 13:54:04.225076 | orchestrator | 2025-11-08 13:54:04.225080 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-11-08 13:54:04.225085 | orchestrator | Saturday 08 November 2025 13:43:14 +0000 (0:00:00.711) 0:00:10.635 ***** 2025-11-08 13:54:04.225089 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.225093 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.225097 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.225112 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.225117 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.225121 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.225125 | orchestrator | 2025-11-08 13:54:04.225129 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-11-08 13:54:04.225197 | orchestrator | Saturday 08 November 2025 13:43:15 +0000 (0:00:01.663) 0:00:12.298 ***** 2025-11-08 13:54:04.225203 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-11-08 13:54:04.225207 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-11-08 13:54:04.225212 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-11-08 13:54:04.225223 | orchestrator | 2025-11-08 13:54:04.225227 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-11-08 13:54:04.225232 | orchestrator | Saturday 08 November 2025 13:43:18 +0000 (0:00:02.969) 0:00:15.267 ***** 2025-11-08 13:54:04.225236 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-11-08 
13:54:04.225241 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-11-08 13:54:04.225245 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-11-08 13:54:04.225249 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.225254 | orchestrator | 2025-11-08 13:54:04.225258 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-11-08 13:54:04.225262 | orchestrator | Saturday 08 November 2025 13:43:19 +0000 (0:00:00.658) 0:00:15.926 ***** 2025-11-08 13:54:04.225269 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.225280 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.225284 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.225289 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.225293 | orchestrator | 2025-11-08 13:54:04.225297 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-11-08 13:54:04.225302 | orchestrator | Saturday 08 November 2025 13:43:20 +0000 (0:00:00.852) 0:00:16.779 ***** 2025-11-08 13:54:04.225308 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.225316 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.225321 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.225325 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.225330 | orchestrator | 2025-11-08 13:54:04.225334 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-11-08 13:54:04.225354 | orchestrator | Saturday 08 November 2025 13:43:21 +0000 (0:00:00.726) 0:00:17.506 ***** 2025-11-08 13:54:04.225368 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 
'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-11-08 13:43:16.615284', 'end': '2025-11-08 13:43:16.878852', 'delta': '0:00:00.263568', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.225381 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-11-08 13:43:17.511560', 'end': '2025-11-08 13:43:17.781073', 'delta': '0:00:00.269513', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.225389 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-11-08 13:43:18.402350', 'end': '2025-11-08 13:43:18.690251', 'delta': '0:00:00.287901', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.225394 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.225398 | orchestrator | 2025-11-08 13:54:04.225404 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-11-08 13:54:04.225409 | orchestrator | Saturday 08 November 2025 13:43:21 +0000 (0:00:00.200) 0:00:17.707 ***** 2025-11-08 13:54:04.225414 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.225419 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.225424 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.225429 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.225434 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.225438 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.225443 | orchestrator | 2025-11-08 13:54:04.225448 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-11-08 13:54:04.225453 | orchestrator | Saturday 08 November 2025 13:43:23 +0000 (0:00:02.607) 0:00:20.315 ***** 2025-11-08 13:54:04.225458 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-11-08 13:54:04.225463 | orchestrator | 2025-11-08 13:54:04.225468 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-11-08 13:54:04.225473 | 
orchestrator | Saturday 08 November 2025 13:43:26 +0000 (0:00:02.063) 0:00:22.378 ***** 2025-11-08 13:54:04.225478 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.225483 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.225489 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.225848 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.225852 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.225857 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.225861 | orchestrator | 2025-11-08 13:54:04.225866 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-11-08 13:54:04.225870 | orchestrator | Saturday 08 November 2025 13:43:27 +0000 (0:00:01.749) 0:00:24.128 ***** 2025-11-08 13:54:04.225875 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.225879 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.225884 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.225894 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.225899 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.225903 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.225907 | orchestrator | 2025-11-08 13:54:04.225912 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-11-08 13:54:04.225916 | orchestrator | Saturday 08 November 2025 13:43:29 +0000 (0:00:01.358) 0:00:25.486 ***** 2025-11-08 13:54:04.225920 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.225924 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.225929 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.225933 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.225940 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.225947 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.225954 | orchestrator | 2025-11-08 13:54:04.225960 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-11-08 13:54:04.225967 | orchestrator | Saturday 08 November 2025 13:43:30 +0000 (0:00:00.922) 0:00:26.409 ***** 2025-11-08 13:54:04.225973 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.225980 | orchestrator | 2025-11-08 13:54:04.225986 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-11-08 13:54:04.225993 | orchestrator | Saturday 08 November 2025 13:43:30 +0000 (0:00:00.138) 0:00:26.548 ***** 2025-11-08 13:54:04.226000 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.226007 | orchestrator | 2025-11-08 13:54:04.226054 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-11-08 13:54:04.226060 | orchestrator | Saturday 08 November 2025 13:43:30 +0000 (0:00:00.272) 0:00:26.820 ***** 2025-11-08 13:54:04.226065 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.226069 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.226074 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.226097 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.226102 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.226107 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.226111 | orchestrator | 2025-11-08 13:54:04.226116 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-11-08 
13:54:04.226120 | orchestrator | Saturday 08 November 2025 13:43:31 +0000 (0:00:00.785) 0:00:27.606 ***** 2025-11-08 13:54:04.226125 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.226276 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.226283 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.226287 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.226292 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.226296 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.226300 | orchestrator | 2025-11-08 13:54:04.226305 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-11-08 13:54:04.226309 | orchestrator | Saturday 08 November 2025 13:43:32 +0000 (0:00:01.735) 0:00:29.341 ***** 2025-11-08 13:54:04.226313 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.226318 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.226322 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.226326 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.226331 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.226335 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.226358 | orchestrator | 2025-11-08 13:54:04.226365 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-11-08 13:54:04.226369 | orchestrator | Saturday 08 November 2025 13:43:34 +0000 (0:00:01.091) 0:00:30.433 ***** 2025-11-08 13:54:04.226373 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.226378 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.226382 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.226425 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.226430 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.226435 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.226447 | orchestrator | 2025-11-08 13:54:04.226451 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-11-08 13:54:04.226456 | orchestrator | Saturday 08 November 2025 13:43:35 +0000 (0:00:01.288) 0:00:31.722 ***** 2025-11-08 13:54:04.226460 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.226464 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.226469 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.226479 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.226483 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.226487 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.226492 | orchestrator | 2025-11-08 13:54:04.226496 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-11-08 13:54:04.226501 | orchestrator | Saturday 08 November 2025 13:43:36 +0000 (0:00:01.248) 0:00:32.970 ***** 2025-11-08 13:54:04.226505 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.226509 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.226514 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.226518 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.226523 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.226527 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.226531 | orchestrator | 2025-11-08 13:54:04.226536 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 
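The device-resolution tasks in this block are all skipped in this run; when they apply, they rewrite devices, dedicated_devices, and bluestore_wal_devices entries that are symlinks (for example /dev/disk/by-id/... paths) into the underlying block device names, as the task names indicate. A minimal sketch of that resolution step, assuming a plain list of device paths; the ceph-facts role does this with Ansible tasks rather than a helper like this:

    import os

    def resolve_device_links(devices):
        """Follow symlinks such as /dev/disk/by-id/... to their real block devices."""
        # os.path.realpath resolves symlinks; plain device paths are returned unchanged.
        return [os.path.realpath(dev) for dev in devices]

    # Hypothetical usage: a by-id symlink would resolve to something like /dev/sdb,
    # while an already-plain path such as /dev/sdd is kept as-is.
    # resolve_device_links(["/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_...", "/dev/sdd"])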
2025-11-08 13:54:04.226540 | orchestrator | Saturday 08 November 2025 13:43:37 +0000 (0:00:00.979) 0:00:33.950 ***** 2025-11-08 13:54:04.226544 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.226562 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.226567 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.226571 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.226575 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.226580 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.226584 | orchestrator | 2025-11-08 13:54:04.226589 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-11-08 13:54:04.226593 | orchestrator | Saturday 08 November 2025 13:43:38 +0000 (0:00:00.597) 0:00:34.547 ***** 2025-11-08 13:54:04.226599 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--cd56445f--4803--5564--bbe6--d923870c576d-osd--block--cd56445f--4803--5564--bbe6--d923870c576d', 'dm-uuid-LVM-2aoSJq8qcletrfZW5Bfk49sieQy7Dha46abW5FdczHWOfObGe9YHIftk1ztHuXyc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.226606 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c507e483--80d4--5110--a9ba--f918053b344b-osd--block--c507e483--80d4--5110--a9ba--f918053b344b', 'dm-uuid-LVM-IDxte1UGWzz3W0bynQvI1szgeLVOvCPVPk5ndyhCZvAs7TGhtenMfsBDN2xArfET'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.226753 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.226761 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.226777 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.226782 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.226790 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.226794 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.226799 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.226803 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.226823 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7bcb89ad-f0c3-4ca7-8180-786cf7e929b8', 'scsi-SQEMU_QEMU_HARDDISK_7bcb89ad-f0c3-4ca7-8180-786cf7e929b8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7bcb89ad-f0c3-4ca7-8180-786cf7e929b8-part1', 'scsi-SQEMU_QEMU_HARDDISK_7bcb89ad-f0c3-4ca7-8180-786cf7e929b8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7bcb89ad-f0c3-4ca7-8180-786cf7e929b8-part14', 'scsi-SQEMU_QEMU_HARDDISK_7bcb89ad-f0c3-4ca7-8180-786cf7e929b8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7bcb89ad-f0c3-4ca7-8180-786cf7e929b8-part15', 'scsi-SQEMU_QEMU_HARDDISK_7bcb89ad-f0c3-4ca7-8180-786cf7e929b8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7bcb89ad-f0c3-4ca7-8180-786cf7e929b8-part16', 'scsi-SQEMU_QEMU_HARDDISK_7bcb89ad-f0c3-4ca7-8180-786cf7e929b8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-08 13:54:04.226837 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--cd56445f--4803--5564--bbe6--d923870c576d-osd--block--cd56445f--4803--5564--bbe6--d923870c576d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wP7Y30-oaef-Tz3m-ymot-UtJb-W1oc-7fXS08', 'scsi-0QEMU_QEMU_HARDDISK_ce3e3473-55e8-454e-8a0a-ac291b184d20', 'scsi-SQEMU_QEMU_HARDDISK_ce3e3473-55e8-454e-8a0a-ac291b184d20'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-08 13:54:04.226842 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--c507e483--80d4--5110--a9ba--f918053b344b-osd--block--c507e483--80d4--5110--a9ba--f918053b344b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Bc3e8j-yfsK-VMtb-Fnua-tbMC-u3Qa-X0FxLG', 'scsi-0QEMU_QEMU_HARDDISK_3757d830-b0af-49e2-85a4-9877085f3a2f', 'scsi-SQEMU_QEMU_HARDDISK_3757d830-b0af-49e2-85a4-9877085f3a2f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-08 13:54:04.226847 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f393addc--5b9a--54bf--a4a6--7d44f9449202-osd--block--f393addc--5b9a--54bf--a4a6--7d44f9449202', 'dm-uuid-LVM-Psb3AsaEyaKNzCJXJeLeWO3LpbpcdM9ixBKgDMtINeobpsh63SGANUXOPb5q1Qgm'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.226881 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e000e6ad-d7f7-4db6-bbc8-734d25f4dc3b', 'scsi-SQEMU_QEMU_HARDDISK_e000e6ad-d7f7-4db6-bbc8-734d25f4dc3b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-08 13:54:04.226900 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--380ddcdc--ed2e--5f5e--8a3f--001787d903df-osd--block--380ddcdc--ed2e--5f5e--8a3f--001787d903df', 'dm-uuid-LVM-XkmoqgmD3aUEVWZlaD5Lzze1pvY5tczAOFqk6zqgtCSDy8gwaoid5OsY42dcffXC'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.226910 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.226915 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-08-12-59-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  
2025-11-08 13:54:04.226923 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.226928 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.226932 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.226938 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.226945 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.226952 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.227183 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.227195 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1fc42fd-5332-49a1-9701-fd67e0fd5d8d', 'scsi-SQEMU_QEMU_HARDDISK_d1fc42fd-5332-49a1-9701-fd67e0fd5d8d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1fc42fd-5332-49a1-9701-fd67e0fd5d8d-part1', 'scsi-SQEMU_QEMU_HARDDISK_d1fc42fd-5332-49a1-9701-fd67e0fd5d8d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1fc42fd-5332-49a1-9701-fd67e0fd5d8d-part14', 'scsi-SQEMU_QEMU_HARDDISK_d1fc42fd-5332-49a1-9701-fd67e0fd5d8d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1fc42fd-5332-49a1-9701-fd67e0fd5d8d-part15', 'scsi-SQEMU_QEMU_HARDDISK_d1fc42fd-5332-49a1-9701-fd67e0fd5d8d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1fc42fd-5332-49a1-9701-fd67e0fd5d8d-part16', 'scsi-SQEMU_QEMU_HARDDISK_d1fc42fd-5332-49a1-9701-fd67e0fd5d8d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-08 13:54:04.227202 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.227207 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f393addc--5b9a--54bf--a4a6--7d44f9449202-osd--block--f393addc--5b9a--54bf--a4a6--7d44f9449202'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dFekCi-fde7-ud2U-Fmt7-Fp42-q7ek-vCoFvX', 'scsi-0QEMU_QEMU_HARDDISK_92c2e246-dc93-49f1-98da-a6574bccf4cb', 'scsi-SQEMU_QEMU_HARDDISK_92c2e246-dc93-49f1-98da-a6574bccf4cb'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-08 13:54:04.227213 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--380ddcdc--ed2e--5f5e--8a3f--001787d903df-osd--block--380ddcdc--ed2e--5f5e--8a3f--001787d903df'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iYWukL-r4Eh-juxx-rEgA-KLFr-hV2P-fzIfJo', 'scsi-0QEMU_QEMU_HARDDISK_dc29408d-4f3e-478d-82da-c226aaca029c', 'scsi-SQEMU_QEMU_HARDDISK_dc29408d-4f3e-478d-82da-c226aaca029c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-08 13:54:04.227234 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a45a4cf7-d855-4857-b9ae-b573b3c7176d', 'scsi-SQEMU_QEMU_HARDDISK_a45a4cf7-d855-4857-b9ae-b573b3c7176d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-08 13:54:04.227240 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-08-13-00-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-08 13:54:04.227248 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--56ba2a68--c761--5674--9bd2--a2481e6b0580-osd--block--56ba2a68--c761--5674--9bd2--a2481e6b0580', 'dm-uuid-LVM-a02JLNVcMB1MMongJvoDhkkHadmwNkJLJ7TOO1SYtEG3RwKJnq6tfFrJWMWuJyDz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.227285 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b5af892c--b8e4--5298--acf4--1670635abe97-osd--block--b5af892c--b8e4--5298--acf4--1670635abe97', 'dm-uuid-LVM-CMLB1kfMUkDAmKaUYr9nLL1AtWJTZsRIFc3JrLIKvs6ht3G9mvyk6WvOaWdhdWof'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.227291 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2025-11-08 13:54:04.227296 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.227300 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.227309 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.227326 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.227331 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.227335 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.227409 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.227419 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.227425 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165', 'scsi-SQEMU_QEMU_HARDDISK_1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165-part1', 'scsi-SQEMU_QEMU_HARDDISK_1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165-part14', 'scsi-SQEMU_QEMU_HARDDISK_1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165-part15', 'scsi-SQEMU_QEMU_HARDDISK_1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165-part16', 'scsi-SQEMU_QEMU_HARDDISK_1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-08 13:54:04.227448 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--56ba2a68--c761--5674--9bd2--a2481e6b0580-osd--block--56ba2a68--c761--5674--9bd2--a2481e6b0580'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tLjwv0-Oeut-hwgd-noei-DeUf-v6Mm-dsBb3I', 'scsi-0QEMU_QEMU_HARDDISK_4485c49e-1f3e-4177-b8cf-e377966726ff', 'scsi-SQEMU_QEMU_HARDDISK_4485c49e-1f3e-4177-b8cf-e377966726ff'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-08 13:54:04.227481 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b5af892c--b8e4--5298--acf4--1670635abe97-osd--block--b5af892c--b8e4--5298--acf4--1670635abe97'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Vi1Q5v-sZk0-8B4D-Vvxf-s8oz-czzq-liaWuw', 'scsi-0QEMU_QEMU_HARDDISK_f84a4500-4dd6-44ad-a9ff-274f9f36fc36', 'scsi-SQEMU_QEMU_HARDDISK_f84a4500-4dd6-44ad-a9ff-274f9f36fc36'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-08 13:54:04.227490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.227494 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c4ff64d0-4838-4e36-9da9-d01e7c6d3995', 'scsi-SQEMU_QEMU_HARDDISK_c4ff64d0-4838-4e36-9da9-d01e7c6d3995'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-08 13:54:04.227499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.227504 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-08-13-00-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-08 13:54:04.227513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.227518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.227534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.227539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.227544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.227553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.227558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9492338a-04c6-4dbd-b6d4-47c0f3d58df2', 'scsi-SQEMU_QEMU_HARDDISK_9492338a-04c6-4dbd-b6d4-47c0f3d58df2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9492338a-04c6-4dbd-b6d4-47c0f3d58df2-part1', 'scsi-SQEMU_QEMU_HARDDISK_9492338a-04c6-4dbd-b6d4-47c0f3d58df2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9492338a-04c6-4dbd-b6d4-47c0f3d58df2-part14', 'scsi-SQEMU_QEMU_HARDDISK_9492338a-04c6-4dbd-b6d4-47c0f3d58df2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9492338a-04c6-4dbd-b6d4-47c0f3d58df2-part15', 'scsi-SQEMU_QEMU_HARDDISK_9492338a-04c6-4dbd-b6d4-47c0f3d58df2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9492338a-04c6-4dbd-b6d4-47c0f3d58df2-part16', 'scsi-SQEMU_QEMU_HARDDISK_9492338a-04c6-4dbd-b6d4-47c0f3d58df2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-08 13:54:04.227753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-08-13-00-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-08 13:54:04.227761 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.227765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.227769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2025-11-08 13:54:04.227808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.227813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.227817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.227821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.227830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.227834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.227874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e00aeba1-5189-4db1-bd39-a4f48e1f1ff4', 'scsi-SQEMU_QEMU_HARDDISK_e00aeba1-5189-4db1-bd39-a4f48e1f1ff4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e00aeba1-5189-4db1-bd39-a4f48e1f1ff4-part1', 'scsi-SQEMU_QEMU_HARDDISK_e00aeba1-5189-4db1-bd39-a4f48e1f1ff4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e00aeba1-5189-4db1-bd39-a4f48e1f1ff4-part14', 'scsi-SQEMU_QEMU_HARDDISK_e00aeba1-5189-4db1-bd39-a4f48e1f1ff4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e00aeba1-5189-4db1-bd39-a4f48e1f1ff4-part15', 'scsi-SQEMU_QEMU_HARDDISK_e00aeba1-5189-4db1-bd39-a4f48e1f1ff4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e00aeba1-5189-4db1-bd39-a4f48e1f1ff4-part16', 'scsi-SQEMU_QEMU_HARDDISK_e00aeba1-5189-4db1-bd39-a4f48e1f1ff4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-08 13:54:04.227918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-08-12-59-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-08 13:54:04.227924 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.227929 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.227933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.227945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.227952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.227960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.228007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.228017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.228024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.228036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:54:04.228043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cd27e0c9-617b-4a12-acb0-00efb73b425b', 'scsi-SQEMU_QEMU_HARDDISK_cd27e0c9-617b-4a12-acb0-00efb73b425b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cd27e0c9-617b-4a12-acb0-00efb73b425b-part1', 'scsi-SQEMU_QEMU_HARDDISK_cd27e0c9-617b-4a12-acb0-00efb73b425b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cd27e0c9-617b-4a12-acb0-00efb73b425b-part14', 'scsi-SQEMU_QEMU_HARDDISK_cd27e0c9-617b-4a12-acb0-00efb73b425b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cd27e0c9-617b-4a12-acb0-00efb73b425b-part15', 'scsi-SQEMU_QEMU_HARDDISK_cd27e0c9-617b-4a12-acb0-00efb73b425b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cd27e0c9-617b-4a12-acb0-00efb73b425b-part16', 'scsi-SQEMU_QEMU_HARDDISK_cd27e0c9-617b-4a12-acb0-00efb73b425b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-08 13:54:04.228105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-08-13-00-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-08 13:54:04.228112 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.228117 | orchestrator | 2025-11-08 13:54:04.228121 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-11-08 13:54:04.228126 | orchestrator | Saturday 08 November 2025 13:43:39 +0000 (0:00:01.702) 0:00:36.250 ***** 2025-11-08 13:54:04.228132 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--cd56445f--4803--5564--bbe6--d923870c576d-osd--block--cd56445f--4803--5564--bbe6--d923870c576d', 'dm-uuid-LVM-2aoSJq8qcletrfZW5Bfk49sieQy7Dha46abW5FdczHWOfObGe9YHIftk1ztHuXyc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.228140 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c507e483--80d4--5110--a9ba--f918053b344b-osd--block--c507e483--80d4--5110--a9ba--f918053b344b', 'dm-uuid-LVM-IDxte1UGWzz3W0bynQvI1szgeLVOvCPVPk5ndyhCZvAs7TGhtenMfsBDN2xArfET'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.228280 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.228286 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.228290 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.228327 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f393addc--5b9a--54bf--a4a6--7d44f9449202-osd--block--f393addc--5b9a--54bf--a4a6--7d44f9449202', 'dm-uuid-LVM-Psb3AsaEyaKNzCJXJeLeWO3LpbpcdM9ixBKgDMtINeobpsh63SGANUXOPb5q1Qgm'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.228333 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--380ddcdc--ed2e--5f5e--8a3f--001787d903df-osd--block--380ddcdc--ed2e--5f5e--8a3f--001787d903df', 'dm-uuid-LVM-XkmoqgmD3aUEVWZlaD5Lzze1pvY5tczAOFqk6zqgtCSDy8gwaoid5OsY42dcffXC'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.228360 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.228397 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.228403 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.228407 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.228443 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.228449 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.228458 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7bcb89ad-f0c3-4ca7-8180-786cf7e929b8', 'scsi-SQEMU_QEMU_HARDDISK_7bcb89ad-f0c3-4ca7-8180-786cf7e929b8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7bcb89ad-f0c3-4ca7-8180-786cf7e929b8-part1', 'scsi-SQEMU_QEMU_HARDDISK_7bcb89ad-f0c3-4ca7-8180-786cf7e929b8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7bcb89ad-f0c3-4ca7-8180-786cf7e929b8-part14', 'scsi-SQEMU_QEMU_HARDDISK_7bcb89ad-f0c3-4ca7-8180-786cf7e929b8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7bcb89ad-f0c3-4ca7-8180-786cf7e929b8-part15', 'scsi-SQEMU_QEMU_HARDDISK_7bcb89ad-f0c3-4ca7-8180-786cf7e929b8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7bcb89ad-f0c3-4ca7-8180-786cf7e929b8-part16', 'scsi-SQEMU_QEMU_HARDDISK_7bcb89ad-f0c3-4ca7-8180-786cf7e929b8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.228467 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.228501 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--cd56445f--4803--5564--bbe6--d923870c576d-osd--block--cd56445f--4803--5564--bbe6--d923870c576d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wP7Y30-oaef-Tz3m-ymot-UtJb-W1oc-7fXS08', 'scsi-0QEMU_QEMU_HARDDISK_ce3e3473-55e8-454e-8a0a-ac291b184d20', 'scsi-SQEMU_QEMU_HARDDISK_ce3e3473-55e8-454e-8a0a-ac291b184d20'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.228511 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--c507e483--80d4--5110--a9ba--f918053b344b-osd--block--c507e483--80d4--5110--a9ba--f918053b344b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Bc3e8j-yfsK-VMtb-Fnua-tbMC-u3Qa-X0FxLG', 'scsi-0QEMU_QEMU_HARDDISK_3757d830-b0af-49e2-85a4-9877085f3a2f', 'scsi-SQEMU_QEMU_HARDDISK_3757d830-b0af-49e2-85a4-9877085f3a2f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.228522 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e000e6ad-d7f7-4db6-bbc8-734d25f4dc3b', 'scsi-SQEMU_QEMU_HARDDISK_e000e6ad-d7f7-4db6-bbc8-734d25f4dc3b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.228527 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.228532 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-08-12-59-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.228644 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--56ba2a68--c761--5674--9bd2--a2481e6b0580-osd--block--56ba2a68--c761--5674--9bd2--a2481e6b0580', 'dm-uuid-LVM-a02JLNVcMB1MMongJvoDhkkHadmwNkJLJ7TOO1SYtEG3RwKJnq6tfFrJWMWuJyDz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.228652 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.228660 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.228668 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.228673 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.228677 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.228723 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.228730 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.228735 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.228746 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b5af892c--b8e4--5298--acf4--1670635abe97-osd--block--b5af892c--b8e4--5298--acf4--1670635abe97', 'dm-uuid-LVM-CMLB1kfMUkDAmKaUYr9nLL1AtWJTZsRIFc3JrLIKvs6ht3G9mvyk6WvOaWdhdWof'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.228750 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.228755 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.228791 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9492338a-04c6-4dbd-b6d4-47c0f3d58df2', 'scsi-SQEMU_QEMU_HARDDISK_9492338a-04c6-4dbd-b6d4-47c0f3d58df2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9492338a-04c6-4dbd-b6d4-47c0f3d58df2-part1', 'scsi-SQEMU_QEMU_HARDDISK_9492338a-04c6-4dbd-b6d4-47c0f3d58df2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9492338a-04c6-4dbd-b6d4-47c0f3d58df2-part14', 'scsi-SQEMU_QEMU_HARDDISK_9492338a-04c6-4dbd-b6d4-47c0f3d58df2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9492338a-04c6-4dbd-b6d4-47c0f3d58df2-part15', 'scsi-SQEMU_QEMU_HARDDISK_9492338a-04c6-4dbd-b6d4-47c0f3d58df2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9492338a-04c6-4dbd-b6d4-47c0f3d58df2-part16', 'scsi-SQEMU_QEMU_HARDDISK_9492338a-04c6-4dbd-b6d4-47c0f3d58df2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.228807 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.228812 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-08-13-00-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.228824 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.228828 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.228863 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.228869 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.228877 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1fc42fd-5332-49a1-9701-fd67e0fd5d8d', 'scsi-SQEMU_QEMU_HARDDISK_d1fc42fd-5332-49a1-9701-fd67e0fd5d8d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1fc42fd-5332-49a1-9701-fd67e0fd5d8d-part1', 'scsi-SQEMU_QEMU_HARDDISK_d1fc42fd-5332-49a1-9701-fd67e0fd5d8d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1fc42fd-5332-49a1-9701-fd67e0fd5d8d-part14', 'scsi-SQEMU_QEMU_HARDDISK_d1fc42fd-5332-49a1-9701-fd67e0fd5d8d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1fc42fd-5332-49a1-9701-fd67e0fd5d8d-part15', 'scsi-SQEMU_QEMU_HARDDISK_d1fc42fd-5332-49a1-9701-fd67e0fd5d8d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1fc42fd-5332-49a1-9701-fd67e0fd5d8d-part16', 'scsi-SQEMU_QEMU_HARDDISK_d1fc42fd-5332-49a1-9701-fd67e0fd5d8d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.228885 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--f393addc--5b9a--54bf--a4a6--7d44f9449202-osd--block--f393addc--5b9a--54bf--a4a6--7d44f9449202'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dFekCi-fde7-ud2U-Fmt7-Fp42-q7ek-vCoFvX', 'scsi-0QEMU_QEMU_HARDDISK_92c2e246-dc93-49f1-98da-a6574bccf4cb', 'scsi-SQEMU_QEMU_HARDDISK_92c2e246-dc93-49f1-98da-a6574bccf4cb'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.228919 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--380ddcdc--ed2e--5f5e--8a3f--001787d903df-osd--block--380ddcdc--ed2e--5f5e--8a3f--001787d903df'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iYWukL-r4Eh-juxx-rEgA-KLFr-hV2P-fzIfJo', 'scsi-0QEMU_QEMU_HARDDISK_dc29408d-4f3e-478d-82da-c226aaca029c', 'scsi-SQEMU_QEMU_HARDDISK_dc29408d-4f3e-478d-82da-c226aaca029c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.228928 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a45a4cf7-d855-4857-b9ae-b573b3c7176d', 'scsi-SQEMU_QEMU_HARDDISK_a45a4cf7-d855-4857-b9ae-b573b3c7176d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.228937 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-08-13-00-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.228944 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.228951 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.228958 | orchestrator | skipping: 
[testbed-node-0] 2025-11-08 13:54:04.229004 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.229013 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.229032 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.229037 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.229044 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.229048 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.229053 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | 
bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.229057 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.229095 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.229101 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.229112 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.229116 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.229121 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.229125 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.229160 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.229173 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.229184 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.229191 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.229195 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.229230 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e00aeba1-5189-4db1-bd39-a4f48e1f1ff4', 'scsi-SQEMU_QEMU_HARDDISK_e00aeba1-5189-4db1-bd39-a4f48e1f1ff4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e00aeba1-5189-4db1-bd39-a4f48e1f1ff4-part1', 'scsi-SQEMU_QEMU_HARDDISK_e00aeba1-5189-4db1-bd39-a4f48e1f1ff4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e00aeba1-5189-4db1-bd39-a4f48e1f1ff4-part14', 'scsi-SQEMU_QEMU_HARDDISK_e00aeba1-5189-4db1-bd39-a4f48e1f1ff4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e00aeba1-5189-4db1-bd39-a4f48e1f1ff4-part15', 'scsi-SQEMU_QEMU_HARDDISK_e00aeba1-5189-4db1-bd39-a4f48e1f1ff4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e00aeba1-5189-4db1-bd39-a4f48e1f1ff4-part16', 'scsi-SQEMU_QEMU_HARDDISK_e00aeba1-5189-4db1-bd39-a4f48e1f1ff4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.229241 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-08-12-59-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.229248 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.229281 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165', 'scsi-SQEMU_QEMU_HARDDISK_1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165-part1', 'scsi-SQEMU_QEMU_HARDDISK_1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165-part14', 'scsi-SQEMU_QEMU_HARDDISK_1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165-part15', 'scsi-SQEMU_QEMU_HARDDISK_1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165-part16', 'scsi-SQEMU_QEMU_HARDDISK_1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': 
'80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.229291 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.229296 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--56ba2a68--c761--5674--9bd2--a2481e6b0580-osd--block--56ba2a68--c761--5674--9bd2--a2481e6b0580'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tLjwv0-Oeut-hwgd-noei-DeUf-v6Mm-dsBb3I', 'scsi-0QEMU_QEMU_HARDDISK_4485c49e-1f3e-4177-b8cf-e377966726ff', 'scsi-SQEMU_QEMU_HARDDISK_4485c49e-1f3e-4177-b8cf-e377966726ff'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.229303 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.229307 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b5af892c--b8e4--5298--acf4--1670635abe97-osd--block--b5af892c--b8e4--5298--acf4--1670635abe97'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Vi1Q5v-sZk0-8B4D-Vvxf-s8oz-czzq-liaWuw', 'scsi-0QEMU_QEMU_HARDDISK_f84a4500-4dd6-44ad-a9ff-274f9f36fc36', 'scsi-SQEMU_QEMU_HARDDISK_f84a4500-4dd6-44ad-a9ff-274f9f36fc36'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.229312 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.229397 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cd27e0c9-617b-4a12-acb0-00efb73b425b', 'scsi-SQEMU_QEMU_HARDDISK_cd27e0c9-617b-4a12-acb0-00efb73b425b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cd27e0c9-617b-4a12-acb0-00efb73b425b-part1', 'scsi-SQEMU_QEMU_HARDDISK_cd27e0c9-617b-4a12-acb0-00efb73b425b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cd27e0c9-617b-4a12-acb0-00efb73b425b-part14', 'scsi-SQEMU_QEMU_HARDDISK_cd27e0c9-617b-4a12-acb0-00efb73b425b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cd27e0c9-617b-4a12-acb0-00efb73b425b-part15', 'scsi-SQEMU_QEMU_HARDDISK_cd27e0c9-617b-4a12-acb0-00efb73b425b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cd27e0c9-617b-4a12-acb0-00efb73b425b-part16', 'scsi-SQEMU_QEMU_HARDDISK_cd27e0c9-617b-4a12-acb0-00efb73b425b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-11-08 13:54:04.229411 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c4ff64d0-4838-4e36-9da9-d01e7c6d3995', 'scsi-SQEMU_QEMU_HARDDISK_c4ff64d0-4838-4e36-9da9-d01e7c6d3995'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.229416 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-08-13-00-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.229421 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-08-13-00-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:54:04.229429 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.229433 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.229437 | orchestrator | 2025-11-08 13:54:04.229472 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-11-08 13:54:04.229479 | orchestrator | Saturday 08 November 2025 13:43:41 +0000 (0:00:01.739) 0:00:37.990 ***** 2025-11-08 13:54:04.229483 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.229488 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.229492 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.229504 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.229508 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.229512 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.229516 | orchestrator | 2025-11-08 13:54:04.229521 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-11-08 13:54:04.229525 | orchestrator | Saturday 08 November 2025 13:43:42 +0000 (0:00:01.231) 0:00:39.222 ***** 2025-11-08 
13:54:04.229529 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.229533 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.229537 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.229541 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.229545 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.229549 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.229558 | orchestrator | 2025-11-08 13:54:04.229563 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-11-08 13:54:04.229567 | orchestrator | Saturday 08 November 2025 13:43:43 +0000 (0:00:00.654) 0:00:39.877 ***** 2025-11-08 13:54:04.229571 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.229575 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.229579 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.229583 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.229587 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.229591 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.229596 | orchestrator | 2025-11-08 13:54:04.229600 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-11-08 13:54:04.229604 | orchestrator | Saturday 08 November 2025 13:43:44 +0000 (0:00:00.940) 0:00:40.817 ***** 2025-11-08 13:54:04.229608 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.229612 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.229616 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.229620 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.229625 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.229629 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.229633 | orchestrator | 2025-11-08 13:54:04.229637 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-11-08 13:54:04.229644 | orchestrator | Saturday 08 November 2025 13:43:45 +0000 (0:00:00.839) 0:00:41.656 ***** 2025-11-08 13:54:04.229648 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.229652 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.229656 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.229660 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.229665 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.229669 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.229673 | orchestrator | 2025-11-08 13:54:04.229677 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-11-08 13:54:04.229681 | orchestrator | Saturday 08 November 2025 13:43:46 +0000 (0:00:00.860) 0:00:42.516 ***** 2025-11-08 13:54:04.229685 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.229689 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.229693 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.229697 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.229701 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.229705 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.229710 | orchestrator | 2025-11-08 13:54:04.229718 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-11-08 13:54:04.229722 | orchestrator | Saturday 08 November 2025 13:43:47 +0000 (0:00:00.848) 0:00:43.364 ***** 2025-11-08 13:54:04.229726 | orchestrator 
| ok: [testbed-node-3] => (item=testbed-node-0) 2025-11-08 13:54:04.229731 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-11-08 13:54:04.229735 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-11-08 13:54:04.229739 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-11-08 13:54:04.229743 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-11-08 13:54:04.229747 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-11-08 13:54:04.229751 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-11-08 13:54:04.229755 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-11-08 13:54:04.229759 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-11-08 13:54:04.229763 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-11-08 13:54:04.229773 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-11-08 13:54:04.229777 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-11-08 13:54:04.229781 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-11-08 13:54:04.229785 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-11-08 13:54:04.229789 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-11-08 13:54:04.229793 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-11-08 13:54:04.229797 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-11-08 13:54:04.229801 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-11-08 13:54:04.229805 | orchestrator | 2025-11-08 13:54:04.229809 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-11-08 13:54:04.229813 | orchestrator | Saturday 08 November 2025 13:43:50 +0000 (0:00:03.447) 0:00:46.812 ***** 2025-11-08 13:54:04.229818 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-11-08 13:54:04.229822 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-11-08 13:54:04.229826 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-11-08 13:54:04.229830 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.229834 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-11-08 13:54:04.229839 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-11-08 13:54:04.229843 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-11-08 13:54:04.229847 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.229851 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-11-08 13:54:04.229868 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-11-08 13:54:04.229873 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-11-08 13:54:04.229877 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-11-08 13:54:04.229881 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.229885 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-11-08 13:54:04.229889 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-11-08 13:54:04.229894 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.229898 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-11-08 13:54:04.229902 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-11-08 13:54:04.229906 | orchestrator | 
skipping: [testbed-node-1] => (item=testbed-node-2)  2025-11-08 13:54:04.229910 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.229914 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-11-08 13:54:04.229918 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-11-08 13:54:04.229922 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-11-08 13:54:04.229926 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.229937 | orchestrator | 2025-11-08 13:54:04.229943 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-11-08 13:54:04.229949 | orchestrator | Saturday 08 November 2025 13:43:51 +0000 (0:00:00.752) 0:00:47.564 ***** 2025-11-08 13:54:04.229955 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.229961 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.229967 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.229974 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 13:54:04.229981 | orchestrator | 2025-11-08 13:54:04.229987 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-11-08 13:54:04.229994 | orchestrator | Saturday 08 November 2025 13:43:52 +0000 (0:00:01.052) 0:00:48.617 ***** 2025-11-08 13:54:04.230001 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.230008 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.230041 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.230046 | orchestrator | 2025-11-08 13:54:04.230050 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-11-08 13:54:04.230054 | orchestrator | Saturday 08 November 2025 13:43:52 +0000 (0:00:00.418) 0:00:49.035 ***** 2025-11-08 13:54:04.230058 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.230062 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.230065 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.230069 | orchestrator | 2025-11-08 13:54:04.230073 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-11-08 13:54:04.230077 | orchestrator | Saturday 08 November 2025 13:43:53 +0000 (0:00:00.579) 0:00:49.615 ***** 2025-11-08 13:54:04.230080 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.230084 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.230088 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.230091 | orchestrator | 2025-11-08 13:54:04.230095 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-11-08 13:54:04.230099 | orchestrator | Saturday 08 November 2025 13:43:54 +0000 (0:00:00.805) 0:00:50.420 ***** 2025-11-08 13:54:04.230103 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.230107 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.230110 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.230115 | orchestrator | 2025-11-08 13:54:04.230119 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-11-08 13:54:04.230124 | orchestrator | Saturday 08 November 2025 13:43:54 +0000 (0:00:00.715) 0:00:51.136 ***** 2025-11-08 13:54:04.230128 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-3)  2025-11-08 13:54:04.230132 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-08 13:54:04.230137 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-08 13:54:04.230141 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.230146 | orchestrator | 2025-11-08 13:54:04.230150 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-11-08 13:54:04.230154 | orchestrator | Saturday 08 November 2025 13:43:55 +0000 (0:00:00.483) 0:00:51.620 ***** 2025-11-08 13:54:04.230159 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-08 13:54:04.230163 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-08 13:54:04.230168 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-08 13:54:04.230172 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.230176 | orchestrator | 2025-11-08 13:54:04.230181 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-11-08 13:54:04.230185 | orchestrator | Saturday 08 November 2025 13:43:55 +0000 (0:00:00.415) 0:00:52.036 ***** 2025-11-08 13:54:04.230189 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-08 13:54:04.230193 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-08 13:54:04.230198 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-08 13:54:04.230206 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.230210 | orchestrator | 2025-11-08 13:54:04.230215 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-11-08 13:54:04.230219 | orchestrator | Saturday 08 November 2025 13:43:56 +0000 (0:00:00.331) 0:00:52.367 ***** 2025-11-08 13:54:04.230224 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.230228 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.230232 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.230236 | orchestrator | 2025-11-08 13:54:04.230241 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-11-08 13:54:04.230245 | orchestrator | Saturday 08 November 2025 13:43:56 +0000 (0:00:00.300) 0:00:52.668 ***** 2025-11-08 13:54:04.230249 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-11-08 13:54:04.230254 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-11-08 13:54:04.230274 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-11-08 13:54:04.230279 | orchestrator | 2025-11-08 13:54:04.230284 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-11-08 13:54:04.230288 | orchestrator | Saturday 08 November 2025 13:43:57 +0000 (0:00:01.298) 0:00:53.967 ***** 2025-11-08 13:54:04.230293 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-11-08 13:54:04.230297 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-11-08 13:54:04.230302 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-11-08 13:54:04.230306 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-11-08 13:54:04.230311 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-11-08 13:54:04.230316 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => 
(item=testbed-node-5) 2025-11-08 13:54:04.230320 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-11-08 13:54:04.230324 | orchestrator | 2025-11-08 13:54:04.230329 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-11-08 13:54:04.230333 | orchestrator | Saturday 08 November 2025 13:43:58 +0000 (0:00:01.278) 0:00:55.245 ***** 2025-11-08 13:54:04.230349 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-11-08 13:54:04.230354 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-11-08 13:54:04.230358 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-11-08 13:54:04.230362 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-11-08 13:54:04.230365 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-11-08 13:54:04.230369 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-11-08 13:54:04.230376 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-11-08 13:54:04.230380 | orchestrator | 2025-11-08 13:54:04.230383 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-11-08 13:54:04.230387 | orchestrator | Saturday 08 November 2025 13:44:00 +0000 (0:00:02.090) 0:00:57.335 ***** 2025-11-08 13:54:04.230391 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:54:04.230396 | orchestrator | 2025-11-08 13:54:04.230400 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-11-08 13:54:04.230403 | orchestrator | Saturday 08 November 2025 13:44:03 +0000 (0:00:02.292) 0:00:59.628 ***** 2025-11-08 13:54:04.230407 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:54:04.230411 | orchestrator | 2025-11-08 13:54:04.230421 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-11-08 13:54:04.230425 | orchestrator | Saturday 08 November 2025 13:44:05 +0000 (0:00:02.493) 0:01:02.122 ***** 2025-11-08 13:54:04.230429 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.230433 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.230436 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.230440 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.230444 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.230448 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.230451 | orchestrator | 2025-11-08 13:54:04.230455 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-11-08 13:54:04.230459 | orchestrator | Saturday 08 November 2025 13:44:07 +0000 (0:00:01.329) 0:01:03.452 ***** 2025-11-08 13:54:04.230463 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.230466 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.230470 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.230474 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.230477 | 
orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.230481 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.230485 | orchestrator | 2025-11-08 13:54:04.230489 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-11-08 13:54:04.230492 | orchestrator | Saturday 08 November 2025 13:44:08 +0000 (0:00:01.404) 0:01:04.856 ***** 2025-11-08 13:54:04.230496 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.230500 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.230504 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.230507 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.230511 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.230515 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.230518 | orchestrator | 2025-11-08 13:54:04.230522 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-11-08 13:54:04.230526 | orchestrator | Saturday 08 November 2025 13:44:10 +0000 (0:00:01.805) 0:01:06.662 ***** 2025-11-08 13:54:04.230529 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.230533 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.230537 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.230541 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.230544 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.230548 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.230552 | orchestrator | 2025-11-08 13:54:04.230555 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-11-08 13:54:04.230559 | orchestrator | Saturday 08 November 2025 13:44:11 +0000 (0:00:01.528) 0:01:08.191 ***** 2025-11-08 13:54:04.230563 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.230567 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.230570 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.230574 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.230578 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.230595 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.230600 | orchestrator | 2025-11-08 13:54:04.230603 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-11-08 13:54:04.230607 | orchestrator | Saturday 08 November 2025 13:44:13 +0000 (0:00:01.500) 0:01:09.692 ***** 2025-11-08 13:54:04.230611 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.230615 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.230618 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.230622 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.230626 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.230629 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.230633 | orchestrator | 2025-11-08 13:54:04.230637 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-11-08 13:54:04.230641 | orchestrator | Saturday 08 November 2025 13:44:13 +0000 (0:00:00.593) 0:01:10.285 ***** 2025-11-08 13:54:04.230645 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.230651 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.230655 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.230659 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.230663 | orchestrator | skipping: [testbed-node-1] 2025-11-08 
13:54:04.230666 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.230670 | orchestrator | 2025-11-08 13:54:04.230674 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-11-08 13:54:04.230678 | orchestrator | Saturday 08 November 2025 13:44:14 +0000 (0:00:00.901) 0:01:11.187 ***** 2025-11-08 13:54:04.230681 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.230685 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.230689 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.230693 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.230696 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.230700 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.230704 | orchestrator | 2025-11-08 13:54:04.230707 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-11-08 13:54:04.230711 | orchestrator | Saturday 08 November 2025 13:44:16 +0000 (0:00:01.189) 0:01:12.376 ***** 2025-11-08 13:54:04.230715 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.230719 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.230722 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.230726 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.230730 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.230733 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.230737 | orchestrator | 2025-11-08 13:54:04.230743 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-11-08 13:54:04.230747 | orchestrator | Saturday 08 November 2025 13:44:17 +0000 (0:00:01.949) 0:01:14.326 ***** 2025-11-08 13:54:04.230751 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.230755 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.230758 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.230762 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.230766 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.230770 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.230773 | orchestrator | 2025-11-08 13:54:04.230777 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-11-08 13:54:04.230781 | orchestrator | Saturday 08 November 2025 13:44:18 +0000 (0:00:00.654) 0:01:14.980 ***** 2025-11-08 13:54:04.230785 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.230788 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.230792 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.230796 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.230800 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.230803 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.230807 | orchestrator | 2025-11-08 13:54:04.230811 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-11-08 13:54:04.230815 | orchestrator | Saturday 08 November 2025 13:44:19 +0000 (0:00:00.933) 0:01:15.914 ***** 2025-11-08 13:54:04.230818 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.230822 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.230826 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.230829 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.230833 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.230837 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.230841 | orchestrator 
| 2025-11-08 13:54:04.230845 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-11-08 13:54:04.230848 | orchestrator | Saturday 08 November 2025 13:44:20 +0000 (0:00:00.772) 0:01:16.687 ***** 2025-11-08 13:54:04.230852 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.230856 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.230860 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.230863 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.230867 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.230871 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.230878 | orchestrator | 2025-11-08 13:54:04.230882 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-11-08 13:54:04.230886 | orchestrator | Saturday 08 November 2025 13:44:21 +0000 (0:00:01.367) 0:01:18.055 ***** 2025-11-08 13:54:04.230890 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.230894 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.230897 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.230901 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.230905 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.230909 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.230912 | orchestrator | 2025-11-08 13:54:04.230916 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-11-08 13:54:04.230920 | orchestrator | Saturday 08 November 2025 13:44:22 +0000 (0:00:00.742) 0:01:18.798 ***** 2025-11-08 13:54:04.230924 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.230927 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.230931 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.230936 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.230942 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.230948 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.230954 | orchestrator | 2025-11-08 13:54:04.230960 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-11-08 13:54:04.230966 | orchestrator | Saturday 08 November 2025 13:44:23 +0000 (0:00:00.941) 0:01:19.740 ***** 2025-11-08 13:54:04.230973 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.230980 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.230986 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.230993 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.231011 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.231015 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.231019 | orchestrator | 2025-11-08 13:54:04.231023 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-11-08 13:54:04.231027 | orchestrator | Saturday 08 November 2025 13:44:24 +0000 (0:00:00.645) 0:01:20.385 ***** 2025-11-08 13:54:04.231031 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.231034 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.231038 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.231042 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.231045 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.231049 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.231053 | orchestrator | 2025-11-08 13:54:04.231057 | orchestrator | 
TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-11-08 13:54:04.231060 | orchestrator | Saturday 08 November 2025 13:44:25 +0000 (0:00:00.994) 0:01:21.380 ***** 2025-11-08 13:54:04.231064 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.231068 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.231072 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.231075 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.231079 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.231083 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.231087 | orchestrator | 2025-11-08 13:54:04.231090 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-11-08 13:54:04.231094 | orchestrator | Saturday 08 November 2025 13:44:25 +0000 (0:00:00.648) 0:01:22.029 ***** 2025-11-08 13:54:04.231098 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.231102 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.231105 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.231109 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.231113 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.231116 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.231120 | orchestrator | 2025-11-08 13:54:04.231124 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-11-08 13:54:04.231128 | orchestrator | Saturday 08 November 2025 13:44:27 +0000 (0:00:01.368) 0:01:23.398 ***** 2025-11-08 13:54:04.231135 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:54:04.231139 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:54:04.231143 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:54:04.231146 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:54:04.231150 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:54:04.231154 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:54:04.231157 | orchestrator | 2025-11-08 13:54:04.231164 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-11-08 13:54:04.231168 | orchestrator | Saturday 08 November 2025 13:44:28 +0000 (0:00:01.438) 0:01:24.836 ***** 2025-11-08 13:54:04.231172 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:54:04.231176 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:54:04.231179 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:54:04.231183 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:54:04.231187 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:54:04.231190 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:54:04.231194 | orchestrator | 2025-11-08 13:54:04.231198 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-11-08 13:54:04.231202 | orchestrator | Saturday 08 November 2025 13:44:30 +0000 (0:00:02.316) 0:01:27.152 ***** 2025-11-08 13:54:04.231205 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:54:04.231209 | orchestrator | 2025-11-08 13:54:04.231213 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2025-11-08 13:54:04.231217 | orchestrator | Saturday 08 November 2025 13:44:31 +0000 (0:00:01.182) 0:01:28.335 ***** 2025-11-08 13:54:04.231220 | orchestrator | skipping: [testbed-node-3] 2025-11-08 
13:54:04.231224 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.231228 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.231231 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.231235 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.231239 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.231242 | orchestrator | 2025-11-08 13:54:04.231246 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-11-08 13:54:04.231250 | orchestrator | Saturday 08 November 2025 13:44:32 +0000 (0:00:00.677) 0:01:29.013 ***** 2025-11-08 13:54:04.231254 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.231257 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.231261 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.231265 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.231268 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.231272 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.231276 | orchestrator | 2025-11-08 13:54:04.231280 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2025-11-08 13:54:04.231283 | orchestrator | Saturday 08 November 2025 13:44:33 +0000 (0:00:00.867) 0:01:29.881 ***** 2025-11-08 13:54:04.231287 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-11-08 13:54:04.231291 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-11-08 13:54:04.231295 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-11-08 13:54:04.231298 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-11-08 13:54:04.231302 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-11-08 13:54:04.231306 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-11-08 13:54:04.231309 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-11-08 13:54:04.231313 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-11-08 13:54:04.231317 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-11-08 13:54:04.231324 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-11-08 13:54:04.231357 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-11-08 13:54:04.231362 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-11-08 13:54:04.231366 | orchestrator | 2025-11-08 13:54:04.231370 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-11-08 13:54:04.231374 | orchestrator | Saturday 08 November 2025 13:44:34 +0000 (0:00:01.314) 0:01:31.196 ***** 2025-11-08 13:54:04.231377 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:54:04.231381 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:54:04.231385 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:54:04.231388 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:54:04.231392 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:54:04.231396 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:54:04.231399 | orchestrator | 2025-11-08 
13:54:04.231403 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-11-08 13:54:04.231407 | orchestrator | Saturday 08 November 2025 13:44:35 +0000 (0:00:01.089) 0:01:32.285 ***** 2025-11-08 13:54:04.231411 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.231414 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.231418 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.231422 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.231425 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.231429 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.231433 | orchestrator | 2025-11-08 13:54:04.231436 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-11-08 13:54:04.231440 | orchestrator | Saturday 08 November 2025 13:44:36 +0000 (0:00:00.512) 0:01:32.797 ***** 2025-11-08 13:54:04.231444 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.231447 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.231451 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.231455 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.231458 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.231462 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.231466 | orchestrator | 2025-11-08 13:54:04.231469 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-11-08 13:54:04.231473 | orchestrator | Saturday 08 November 2025 13:44:37 +0000 (0:00:00.673) 0:01:33.470 ***** 2025-11-08 13:54:04.231477 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.231481 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.231484 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.231488 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.231492 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.231496 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.231499 | orchestrator | 2025-11-08 13:54:04.231503 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-11-08 13:54:04.231507 | orchestrator | Saturday 08 November 2025 13:44:37 +0000 (0:00:00.486) 0:01:33.957 ***** 2025-11-08 13:54:04.231510 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:54:04.231514 | orchestrator | 2025-11-08 13:54:04.231541 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-11-08 13:54:04.231545 | orchestrator | Saturday 08 November 2025 13:44:38 +0000 (0:00:00.989) 0:01:34.946 ***** 2025-11-08 13:54:04.231549 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.231553 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.231556 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.231560 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.231564 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.231567 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.231571 | orchestrator | 2025-11-08 13:54:04.231575 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-11-08 13:54:04.231582 | orchestrator | Saturday 08 November 2025 13:45:30 +0000 (0:00:52.394) 0:02:27.341 ***** 
2025-11-08 13:54:04.231586 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-11-08 13:54:04.231590 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-11-08 13:54:04.231593 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-11-08 13:54:04.231597 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.231601 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-11-08 13:54:04.231604 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-11-08 13:54:04.231608 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-11-08 13:54:04.231612 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.231615 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-11-08 13:54:04.231619 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-11-08 13:54:04.231623 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-11-08 13:54:04.231627 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.231630 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-11-08 13:54:04.231634 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-11-08 13:54:04.231638 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-11-08 13:54:04.231641 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.231645 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-11-08 13:54:04.231649 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-11-08 13:54:04.231653 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-11-08 13:54:04.231656 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.231673 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-11-08 13:54:04.231677 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-11-08 13:54:04.231681 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-11-08 13:54:04.231685 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.231689 | orchestrator | 2025-11-08 13:54:04.231692 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-11-08 13:54:04.231696 | orchestrator | Saturday 08 November 2025 13:45:31 +0000 (0:00:00.579) 0:02:27.921 ***** 2025-11-08 13:54:04.231700 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.231704 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.231707 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.231711 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.231715 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.231719 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.231722 | orchestrator | 2025-11-08 13:54:04.231726 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-11-08 13:54:04.231730 | orchestrator | Saturday 08 November 2025 13:45:32 +0000 (0:00:00.688) 0:02:28.610 ***** 2025-11-08 13:54:04.231734 | orchestrator | 
skipping: [testbed-node-3] 2025-11-08 13:54:04.231738 | orchestrator | 2025-11-08 13:54:04.231741 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-11-08 13:54:04.231745 | orchestrator | Saturday 08 November 2025 13:45:32 +0000 (0:00:00.142) 0:02:28.752 ***** 2025-11-08 13:54:04.231749 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.231752 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.231756 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.231760 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.231767 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.231770 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.231774 | orchestrator | 2025-11-08 13:54:04.231778 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-11-08 13:54:04.231782 | orchestrator | Saturday 08 November 2025 13:45:33 +0000 (0:00:00.666) 0:02:29.418 ***** 2025-11-08 13:54:04.231785 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.231789 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.231793 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.231797 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.231803 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.231807 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.231811 | orchestrator | 2025-11-08 13:54:04.231814 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-11-08 13:54:04.231818 | orchestrator | Saturday 08 November 2025 13:45:34 +0000 (0:00:00.970) 0:02:30.389 ***** 2025-11-08 13:54:04.231822 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.231826 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.231829 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.231833 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.231837 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.231840 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.231844 | orchestrator | 2025-11-08 13:54:04.231848 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-11-08 13:54:04.231852 | orchestrator | Saturday 08 November 2025 13:45:34 +0000 (0:00:00.818) 0:02:31.207 ***** 2025-11-08 13:54:04.231855 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.231859 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.231863 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.231867 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.231870 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.231874 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.231878 | orchestrator | 2025-11-08 13:54:04.231882 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-11-08 13:54:04.231885 | orchestrator | Saturday 08 November 2025 13:45:37 +0000 (0:00:02.806) 0:02:34.013 ***** 2025-11-08 13:54:04.231889 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.231893 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.231896 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.231900 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.231904 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.231907 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.231911 | 
orchestrator | 2025-11-08 13:54:04.231915 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-11-08 13:54:04.231919 | orchestrator | Saturday 08 November 2025 13:45:38 +0000 (0:00:00.652) 0:02:34.666 ***** 2025-11-08 13:54:04.231922 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:54:04.231927 | orchestrator | 2025-11-08 13:54:04.231931 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-11-08 13:54:04.231936 | orchestrator | Saturday 08 November 2025 13:45:39 +0000 (0:00:01.524) 0:02:36.191 ***** 2025-11-08 13:54:04.231942 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.231949 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.231955 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.231961 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.231968 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.231975 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.231981 | orchestrator | 2025-11-08 13:54:04.231988 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-11-08 13:54:04.231992 | orchestrator | Saturday 08 November 2025 13:45:41 +0000 (0:00:01.593) 0:02:37.785 ***** 2025-11-08 13:54:04.231996 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.232003 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.232007 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.232011 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.232015 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.232018 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.232022 | orchestrator | 2025-11-08 13:54:04.232026 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-11-08 13:54:04.232029 | orchestrator | Saturday 08 November 2025 13:45:42 +0000 (0:00:01.132) 0:02:38.917 ***** 2025-11-08 13:54:04.232033 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.232037 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.232055 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.232059 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.232063 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.232067 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.232070 | orchestrator | 2025-11-08 13:54:04.232074 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-11-08 13:54:04.232078 | orchestrator | Saturday 08 November 2025 13:45:43 +0000 (0:00:01.048) 0:02:39.966 ***** 2025-11-08 13:54:04.232082 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.232085 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.232089 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.232093 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.232097 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.232100 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.232104 | orchestrator | 2025-11-08 13:54:04.232108 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2025-11-08 13:54:04.232112 | orchestrator | Saturday 08 November 2025 
13:45:44 +0000 (0:00:00.651) 0:02:40.617 ***** 2025-11-08 13:54:04.232116 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.232119 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.232123 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.232127 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.232130 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.232134 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.232138 | orchestrator | 2025-11-08 13:54:04.232141 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2025-11-08 13:54:04.232145 | orchestrator | Saturday 08 November 2025 13:45:45 +0000 (0:00:00.792) 0:02:41.410 ***** 2025-11-08 13:54:04.232149 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.232153 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.232156 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.232160 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.232164 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.232167 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.232171 | orchestrator | 2025-11-08 13:54:04.232175 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2025-11-08 13:54:04.232179 | orchestrator | Saturday 08 November 2025 13:45:45 +0000 (0:00:00.730) 0:02:42.141 ***** 2025-11-08 13:54:04.232182 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.232186 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.232194 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.232198 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.232201 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.232205 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.232209 | orchestrator | 2025-11-08 13:54:04.232213 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2025-11-08 13:54:04.232217 | orchestrator | Saturday 08 November 2025 13:45:46 +0000 (0:00:01.071) 0:02:43.212 ***** 2025-11-08 13:54:04.232220 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.232224 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.232228 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.232231 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.232238 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.232242 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.232246 | orchestrator | 2025-11-08 13:54:04.232249 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2025-11-08 13:54:04.232253 | orchestrator | Saturday 08 November 2025 13:45:47 +0000 (0:00:00.868) 0:02:44.080 ***** 2025-11-08 13:54:04.232257 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.232261 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.232264 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.232268 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.232272 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.232276 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.232279 | orchestrator | 2025-11-08 13:54:04.232283 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2025-11-08 13:54:04.232287 | orchestrator | Saturday 08 November 2025 13:45:49 +0000 (0:00:01.554) 
0:02:45.635 ***** 2025-11-08 13:54:04.232291 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:54:04.232295 | orchestrator | 2025-11-08 13:54:04.232298 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-11-08 13:54:04.232302 | orchestrator | Saturday 08 November 2025 13:45:50 +0000 (0:00:01.260) 0:02:46.895 ***** 2025-11-08 13:54:04.232306 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-11-08 13:54:04.232310 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-11-08 13:54:04.232313 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-11-08 13:54:04.232317 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-11-08 13:54:04.232321 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-11-08 13:54:04.232325 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-11-08 13:54:04.232328 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-11-08 13:54:04.232332 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-11-08 13:54:04.232336 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-11-08 13:54:04.232372 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-11-08 13:54:04.232376 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-11-08 13:54:04.232380 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-11-08 13:54:04.232384 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-11-08 13:54:04.232388 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-11-08 13:54:04.232391 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-11-08 13:54:04.232395 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-11-08 13:54:04.232399 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-11-08 13:54:04.232403 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-11-08 13:54:04.232420 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-11-08 13:54:04.232424 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-11-08 13:54:04.232428 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-11-08 13:54:04.232432 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-11-08 13:54:04.232435 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-11-08 13:54:04.232439 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-11-08 13:54:04.232443 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-11-08 13:54:04.232447 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-11-08 13:54:04.232450 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-11-08 13:54:04.232454 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-11-08 13:54:04.232458 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-11-08 13:54:04.232465 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-11-08 13:54:04.232469 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-11-08 13:54:04.232472 | orchestrator | changed: 
[testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-11-08 13:54:04.232476 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-11-08 13:54:04.232480 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-11-08 13:54:04.232484 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2025-11-08 13:54:04.232487 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-11-08 13:54:04.232491 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-11-08 13:54:04.232495 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-11-08 13:54:04.232499 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-11-08 13:54:04.232502 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-11-08 13:54:04.232506 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-11-08 13:54:04.232510 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-11-08 13:54:04.232518 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-11-08 13:54:04.232522 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-11-08 13:54:04.232526 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-11-08 13:54:04.232530 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-11-08 13:54:04.232534 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-11-08 13:54:04.232538 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-11-08 13:54:04.232541 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-11-08 13:54:04.232545 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-11-08 13:54:04.232549 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-11-08 13:54:04.232553 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-11-08 13:54:04.232556 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2025-11-08 13:54:04.232560 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-11-08 13:54:04.232564 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-11-08 13:54:04.232568 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-11-08 13:54:04.232571 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-11-08 13:54:04.232575 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-11-08 13:54:04.232579 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-11-08 13:54:04.232583 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-11-08 13:54:04.232586 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-11-08 13:54:04.232590 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-11-08 13:54:04.232594 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-11-08 13:54:04.232598 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-11-08 13:54:04.232601 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-11-08 13:54:04.232605 | orchestrator | changed: [testbed-node-1] 
=> (item=/var/lib/ceph/bootstrap-osd) 2025-11-08 13:54:04.232609 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-11-08 13:54:04.232613 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-11-08 13:54:04.232616 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-11-08 13:54:04.232620 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-11-08 13:54:04.232627 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-11-08 13:54:04.232631 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-11-08 13:54:04.232635 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-11-08 13:54:04.232638 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-11-08 13:54:04.232642 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-11-08 13:54:04.232646 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-11-08 13:54:04.232662 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-11-08 13:54:04.232666 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-11-08 13:54:04.232670 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-11-08 13:54:04.232674 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-11-08 13:54:04.232678 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-11-08 13:54:04.232683 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-11-08 13:54:04.232690 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-11-08 13:54:04.232699 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-11-08 13:54:04.232706 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-11-08 13:54:04.232712 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-11-08 13:54:04.232718 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-11-08 13:54:04.232723 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-11-08 13:54:04.232729 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-11-08 13:54:04.232735 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-11-08 13:54:04.232740 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-11-08 13:54:04.232746 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-11-08 13:54:04.232753 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-11-08 13:54:04.232759 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-11-08 13:54:04.232765 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-11-08 13:54:04.232771 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-11-08 13:54:04.232777 | orchestrator | 2025-11-08 13:54:04.232784 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-11-08 13:54:04.232790 | orchestrator | Saturday 08 November 2025 13:45:58 +0000 (0:00:07.601) 0:02:54.496 ***** 2025-11-08 13:54:04.232795 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.232801 | orchestrator | 
skipping: [testbed-node-1] 2025-11-08 13:54:04.232810 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.232814 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 13:54:04.232818 | orchestrator | 2025-11-08 13:54:04.232822 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-11-08 13:54:04.232826 | orchestrator | Saturday 08 November 2025 13:45:59 +0000 (0:00:01.161) 0:02:55.657 ***** 2025-11-08 13:54:04.232830 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-11-08 13:54:04.232834 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-11-08 13:54:04.232838 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-11-08 13:54:04.232842 | orchestrator | 2025-11-08 13:54:04.232845 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2025-11-08 13:54:04.232853 | orchestrator | Saturday 08 November 2025 13:46:00 +0000 (0:00:00.927) 0:02:56.585 ***** 2025-11-08 13:54:04.232857 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-11-08 13:54:04.232861 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-11-08 13:54:04.232865 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-11-08 13:54:04.232869 | orchestrator | 2025-11-08 13:54:04.232873 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2025-11-08 13:54:04.232876 | orchestrator | Saturday 08 November 2025 13:46:01 +0000 (0:00:01.112) 0:02:57.697 ***** 2025-11-08 13:54:04.232880 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.232884 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.232888 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.232891 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.232895 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.232899 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.232902 | orchestrator | 2025-11-08 13:54:04.232906 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-11-08 13:54:04.232910 | orchestrator | Saturday 08 November 2025 13:46:02 +0000 (0:00:00.756) 0:02:58.454 ***** 2025-11-08 13:54:04.232914 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.232917 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.232921 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.232925 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.232929 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.232932 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.232937 | orchestrator | 2025-11-08 13:54:04.232945 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-11-08 13:54:04.232954 | orchestrator | Saturday 08 November 2025 13:46:03 +0000 (0:00:01.155) 0:02:59.609 ***** 
2025-11-08 13:54:04.232960 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.232965 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.232971 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.232976 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.232981 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.232987 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.232993 | orchestrator | 2025-11-08 13:54:04.233018 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-11-08 13:54:04.233025 | orchestrator | Saturday 08 November 2025 13:46:04 +0000 (0:00:00.738) 0:03:00.348 ***** 2025-11-08 13:54:04.233032 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.233038 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.233044 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.233050 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.233056 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.233063 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.233068 | orchestrator | 2025-11-08 13:54:04.233072 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-11-08 13:54:04.233076 | orchestrator | Saturday 08 November 2025 13:46:05 +0000 (0:00:01.198) 0:03:01.547 ***** 2025-11-08 13:54:04.233079 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.233083 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.233087 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.233090 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.233094 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.233098 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.233101 | orchestrator | 2025-11-08 13:54:04.233105 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-11-08 13:54:04.233109 | orchestrator | Saturday 08 November 2025 13:46:05 +0000 (0:00:00.621) 0:03:02.168 ***** 2025-11-08 13:54:04.233121 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.233125 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.233129 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.233132 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.233136 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.233140 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.233143 | orchestrator | 2025-11-08 13:54:04.233147 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-11-08 13:54:04.233151 | orchestrator | Saturday 08 November 2025 13:46:06 +0000 (0:00:01.007) 0:03:03.176 ***** 2025-11-08 13:54:04.233155 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.233158 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.233162 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.233166 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.233169 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.233173 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.233177 | orchestrator | 2025-11-08 13:54:04.233184 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-11-08 13:54:04.233188 | 
orchestrator | Saturday 08 November 2025 13:46:07 +0000 (0:00:00.724) 0:03:03.900 ***** 2025-11-08 13:54:04.233191 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.233195 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.233199 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.233202 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.233206 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.233210 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.233213 | orchestrator | 2025-11-08 13:54:04.233217 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-11-08 13:54:04.233221 | orchestrator | Saturday 08 November 2025 13:46:08 +0000 (0:00:00.919) 0:03:04.820 ***** 2025-11-08 13:54:04.233225 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.233228 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.233232 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.233236 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.233239 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.233243 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.233247 | orchestrator | 2025-11-08 13:54:04.233251 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-11-08 13:54:04.233254 | orchestrator | Saturday 08 November 2025 13:46:11 +0000 (0:00:03.004) 0:03:07.824 ***** 2025-11-08 13:54:04.233258 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.233262 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.233265 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.233269 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.233273 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.233277 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.233280 | orchestrator | 2025-11-08 13:54:04.233284 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-11-08 13:54:04.233288 | orchestrator | Saturday 08 November 2025 13:46:13 +0000 (0:00:01.652) 0:03:09.477 ***** 2025-11-08 13:54:04.233291 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.233295 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.233299 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.233302 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.233306 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.233310 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.233314 | orchestrator | 2025-11-08 13:54:04.233317 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-11-08 13:54:04.233321 | orchestrator | Saturday 08 November 2025 13:46:14 +0000 (0:00:01.217) 0:03:10.694 ***** 2025-11-08 13:54:04.233325 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.233328 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.233354 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.233359 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.233363 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.233366 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.233370 | orchestrator | 2025-11-08 13:54:04.233374 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2025-11-08 13:54:04.233378 | orchestrator | Saturday 08 
November 2025 13:46:15 +0000 (0:00:01.167) 0:03:11.861 ***** 2025-11-08 13:54:04.233381 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-11-08 13:54:04.233385 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-11-08 13:54:04.233389 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-11-08 13:54:04.233393 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.233419 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.233428 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.233433 | orchestrator | 2025-11-08 13:54:04.233440 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2025-11-08 13:54:04.233445 | orchestrator | Saturday 08 November 2025 13:46:16 +0000 (0:00:00.931) 0:03:12.792 ***** 2025-11-08 13:54:04.233453 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2025-11-08 13:54:04.233461 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2025-11-08 13:54:04.233467 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.233473 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2025-11-08 13:54:04.233479 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2025-11-08 13:54:04.233490 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.233496 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2025-11-08 13:54:04.233502 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2025-11-08 13:54:04.233507 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.233512 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.233517 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.233523 | orchestrator 
| skipping: [testbed-node-2] 2025-11-08 13:54:04.233529 | orchestrator | 2025-11-08 13:54:04.233535 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2025-11-08 13:54:04.233547 | orchestrator | Saturday 08 November 2025 13:46:17 +0000 (0:00:01.224) 0:03:14.017 ***** 2025-11-08 13:54:04.233553 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.233559 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.233565 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.233571 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.233577 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.233583 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.233589 | orchestrator | 2025-11-08 13:54:04.233592 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2025-11-08 13:54:04.233596 | orchestrator | Saturday 08 November 2025 13:46:18 +0000 (0:00:01.127) 0:03:15.145 ***** 2025-11-08 13:54:04.233600 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.233604 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.233607 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.233611 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.233615 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.233618 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.233622 | orchestrator | 2025-11-08 13:54:04.233626 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-11-08 13:54:04.233630 | orchestrator | Saturday 08 November 2025 13:46:19 +0000 (0:00:01.188) 0:03:16.333 ***** 2025-11-08 13:54:04.233633 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.233637 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.233641 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.233645 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.233648 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.233652 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.233656 | orchestrator | 2025-11-08 13:54:04.233659 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-11-08 13:54:04.233663 | orchestrator | Saturday 08 November 2025 13:46:20 +0000 (0:00:00.768) 0:03:17.102 ***** 2025-11-08 13:54:04.233667 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.233671 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.233674 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.233678 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.233682 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.233685 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.233689 | orchestrator | 2025-11-08 13:54:04.233693 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-11-08 13:54:04.233716 | orchestrator | Saturday 08 November 2025 13:46:21 +0000 (0:00:00.861) 0:03:17.963 ***** 2025-11-08 13:54:04.233721 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.233725 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.233728 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.233732 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.233736 | orchestrator | 
skipping: [testbed-node-1] 2025-11-08 13:54:04.233739 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.233743 | orchestrator | 2025-11-08 13:54:04.233747 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-11-08 13:54:04.233751 | orchestrator | Saturday 08 November 2025 13:46:22 +0000 (0:00:00.628) 0:03:18.591 ***** 2025-11-08 13:54:04.233755 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.233758 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.233762 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.233766 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.233770 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.233773 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.233777 | orchestrator | 2025-11-08 13:54:04.233781 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-11-08 13:54:04.233785 | orchestrator | Saturday 08 November 2025 13:46:23 +0000 (0:00:00.962) 0:03:19.553 ***** 2025-11-08 13:54:04.233793 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-08 13:54:04.233797 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-08 13:54:04.233801 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-08 13:54:04.233807 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.233814 | orchestrator | 2025-11-08 13:54:04.233820 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-11-08 13:54:04.233826 | orchestrator | Saturday 08 November 2025 13:46:23 +0000 (0:00:00.362) 0:03:19.916 ***** 2025-11-08 13:54:04.233832 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-08 13:54:04.233838 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-08 13:54:04.233844 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-08 13:54:04.233850 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.233857 | orchestrator | 2025-11-08 13:54:04.233863 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-11-08 13:54:04.233873 | orchestrator | Saturday 08 November 2025 13:46:23 +0000 (0:00:00.388) 0:03:20.304 ***** 2025-11-08 13:54:04.233880 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-08 13:54:04.233885 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-08 13:54:04.233889 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-08 13:54:04.233893 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.233897 | orchestrator | 2025-11-08 13:54:04.233900 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-11-08 13:54:04.233904 | orchestrator | Saturday 08 November 2025 13:46:24 +0000 (0:00:00.375) 0:03:20.680 ***** 2025-11-08 13:54:04.233908 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.233911 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.233915 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.233919 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.233922 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.233926 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.233930 | orchestrator | 2025-11-08 13:54:04.233934 | orchestrator | TASK [ceph-facts : Set_fact 
rgw_instances] ************************************* 2025-11-08 13:54:04.233937 | orchestrator | Saturday 08 November 2025 13:46:24 +0000 (0:00:00.575) 0:03:21.255 ***** 2025-11-08 13:54:04.233941 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-11-08 13:54:04.233945 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-11-08 13:54:04.233948 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-11-08 13:54:04.233953 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-11-08 13:54:04.233956 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.233960 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-11-08 13:54:04.233963 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.233967 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-11-08 13:54:04.233971 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.233974 | orchestrator | 2025-11-08 13:54:04.233978 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2025-11-08 13:54:04.233982 | orchestrator | Saturday 08 November 2025 13:46:27 +0000 (0:00:02.414) 0:03:23.670 ***** 2025-11-08 13:54:04.233986 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:54:04.233989 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:54:04.233993 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:54:04.233997 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:54:04.234000 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:54:04.234004 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:54:04.234008 | orchestrator | 2025-11-08 13:54:04.234011 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-11-08 13:54:04.234033 | orchestrator | Saturday 08 November 2025 13:46:29 +0000 (0:00:02.603) 0:03:26.274 ***** 2025-11-08 13:54:04.234037 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:54:04.234041 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:54:04.234048 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:54:04.234052 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:54:04.234056 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:54:04.234059 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:54:04.234063 | orchestrator | 2025-11-08 13:54:04.234067 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-11-08 13:54:04.234071 | orchestrator | Saturday 08 November 2025 13:46:31 +0000 (0:00:01.357) 0:03:27.631 ***** 2025-11-08 13:54:04.234075 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.234078 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.234082 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.234086 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:54:04.234090 | orchestrator | 2025-11-08 13:54:04.234094 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-11-08 13:54:04.234113 | orchestrator | Saturday 08 November 2025 13:46:32 +0000 (0:00:01.101) 0:03:28.732 ***** 2025-11-08 13:54:04.234117 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.234121 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.234125 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.234129 | orchestrator | 2025-11-08 13:54:04.234132 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart 
script] *********************** 2025-11-08 13:54:04.234136 | orchestrator | Saturday 08 November 2025 13:46:32 +0000 (0:00:00.387) 0:03:29.120 ***** 2025-11-08 13:54:04.234140 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:54:04.234144 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:54:04.234147 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:54:04.234151 | orchestrator | 2025-11-08 13:54:04.234155 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-11-08 13:54:04.234159 | orchestrator | Saturday 08 November 2025 13:46:34 +0000 (0:00:01.317) 0:03:30.437 ***** 2025-11-08 13:54:04.234162 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-11-08 13:54:04.234166 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-11-08 13:54:04.234170 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-11-08 13:54:04.234174 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.234178 | orchestrator | 2025-11-08 13:54:04.234181 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-11-08 13:54:04.234185 | orchestrator | Saturday 08 November 2025 13:46:35 +0000 (0:00:01.159) 0:03:31.596 ***** 2025-11-08 13:54:04.234189 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.234193 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.234196 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.234200 | orchestrator | 2025-11-08 13:54:04.234204 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-11-08 13:54:04.234208 | orchestrator | Saturday 08 November 2025 13:46:35 +0000 (0:00:00.367) 0:03:31.963 ***** 2025-11-08 13:54:04.234211 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.234215 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.234219 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.234223 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 13:54:04.234226 | orchestrator | 2025-11-08 13:54:04.234230 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-11-08 13:54:04.234237 | orchestrator | Saturday 08 November 2025 13:46:36 +0000 (0:00:01.056) 0:03:33.020 ***** 2025-11-08 13:54:04.234241 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-08 13:54:04.234245 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-08 13:54:04.234249 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-08 13:54:04.234252 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.234256 | orchestrator | 2025-11-08 13:54:04.234260 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-11-08 13:54:04.234269 | orchestrator | Saturday 08 November 2025 13:46:37 +0000 (0:00:00.426) 0:03:33.447 ***** 2025-11-08 13:54:04.234272 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.234276 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.234280 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.234283 | orchestrator | 2025-11-08 13:54:04.234287 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-11-08 13:54:04.234291 | orchestrator | Saturday 08 November 2025 13:46:37 +0000 
(0:00:00.348) 0:03:33.795 ***** 2025-11-08 13:54:04.234295 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.234298 | orchestrator | 2025-11-08 13:54:04.234302 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-11-08 13:54:04.234306 | orchestrator | Saturday 08 November 2025 13:46:37 +0000 (0:00:00.216) 0:03:34.012 ***** 2025-11-08 13:54:04.234310 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.234313 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.234317 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.234321 | orchestrator | 2025-11-08 13:54:04.234325 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-11-08 13:54:04.234328 | orchestrator | Saturday 08 November 2025 13:46:38 +0000 (0:00:00.369) 0:03:34.381 ***** 2025-11-08 13:54:04.234332 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.234336 | orchestrator | 2025-11-08 13:54:04.234375 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-11-08 13:54:04.234379 | orchestrator | Saturday 08 November 2025 13:46:38 +0000 (0:00:00.218) 0:03:34.600 ***** 2025-11-08 13:54:04.234383 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.234387 | orchestrator | 2025-11-08 13:54:04.234390 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-11-08 13:54:04.234394 | orchestrator | Saturday 08 November 2025 13:46:38 +0000 (0:00:00.219) 0:03:34.820 ***** 2025-11-08 13:54:04.234398 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.234402 | orchestrator | 2025-11-08 13:54:04.234405 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-11-08 13:54:04.234409 | orchestrator | Saturday 08 November 2025 13:46:38 +0000 (0:00:00.125) 0:03:34.945 ***** 2025-11-08 13:54:04.234413 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.234416 | orchestrator | 2025-11-08 13:54:04.234420 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-11-08 13:54:04.234424 | orchestrator | Saturday 08 November 2025 13:46:39 +0000 (0:00:00.759) 0:03:35.705 ***** 2025-11-08 13:54:04.234428 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.234431 | orchestrator | 2025-11-08 13:54:04.234435 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-11-08 13:54:04.234439 | orchestrator | Saturday 08 November 2025 13:46:39 +0000 (0:00:00.224) 0:03:35.929 ***** 2025-11-08 13:54:04.234442 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-08 13:54:04.234446 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-08 13:54:04.234450 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-08 13:54:04.234454 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.234458 | orchestrator | 2025-11-08 13:54:04.234461 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-11-08 13:54:04.234478 | orchestrator | Saturday 08 November 2025 13:46:40 +0000 (0:00:00.425) 0:03:36.355 ***** 2025-11-08 13:54:04.234483 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.234487 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.234490 | orchestrator | skipping: [testbed-node-5] 2025-11-08 
13:54:04.234494 | orchestrator | 2025-11-08 13:54:04.234498 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-11-08 13:54:04.234502 | orchestrator | Saturday 08 November 2025 13:46:40 +0000 (0:00:00.322) 0:03:36.677 ***** 2025-11-08 13:54:04.234505 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.234509 | orchestrator | 2025-11-08 13:54:04.234513 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-11-08 13:54:04.234522 | orchestrator | Saturday 08 November 2025 13:46:40 +0000 (0:00:00.266) 0:03:36.944 ***** 2025-11-08 13:54:04.234525 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.234529 | orchestrator | 2025-11-08 13:54:04.234533 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-11-08 13:54:04.234537 | orchestrator | Saturday 08 November 2025 13:46:40 +0000 (0:00:00.219) 0:03:37.163 ***** 2025-11-08 13:54:04.234540 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.234544 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.234548 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.234552 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 13:54:04.234555 | orchestrator | 2025-11-08 13:54:04.234559 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-11-08 13:54:04.234563 | orchestrator | Saturday 08 November 2025 13:46:41 +0000 (0:00:01.158) 0:03:38.322 ***** 2025-11-08 13:54:04.234567 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.234570 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.234574 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.234578 | orchestrator | 2025-11-08 13:54:04.234582 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-11-08 13:54:04.234586 | orchestrator | Saturday 08 November 2025 13:46:42 +0000 (0:00:00.358) 0:03:38.680 ***** 2025-11-08 13:54:04.234589 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:54:04.234593 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:54:04.234597 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:54:04.234601 | orchestrator | 2025-11-08 13:54:04.234610 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-11-08 13:54:04.234614 | orchestrator | Saturday 08 November 2025 13:46:43 +0000 (0:00:01.416) 0:03:40.097 ***** 2025-11-08 13:54:04.234617 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-08 13:54:04.234621 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-08 13:54:04.234625 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-08 13:54:04.234629 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.234632 | orchestrator | 2025-11-08 13:54:04.234636 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-11-08 13:54:04.234640 | orchestrator | Saturday 08 November 2025 13:46:44 +0000 (0:00:00.967) 0:03:41.065 ***** 2025-11-08 13:54:04.234644 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.234647 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.234651 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.234655 | orchestrator | 2025-11-08 13:54:04.234659 | orchestrator | 
RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-11-08 13:54:04.234662 | orchestrator | Saturday 08 November 2025 13:46:45 +0000 (0:00:00.563) 0:03:41.628 ***** 2025-11-08 13:54:04.234666 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.234670 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.234674 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.234677 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 13:54:04.234681 | orchestrator | 2025-11-08 13:54:04.234685 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-11-08 13:54:04.234689 | orchestrator | Saturday 08 November 2025 13:46:46 +0000 (0:00:00.923) 0:03:42.552 ***** 2025-11-08 13:54:04.234692 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.234696 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.234700 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.234704 | orchestrator | 2025-11-08 13:54:04.234707 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-11-08 13:54:04.234711 | orchestrator | Saturday 08 November 2025 13:46:46 +0000 (0:00:00.552) 0:03:43.105 ***** 2025-11-08 13:54:04.234715 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:54:04.234722 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:54:04.234726 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:54:04.234730 | orchestrator | 2025-11-08 13:54:04.234733 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-11-08 13:54:04.234737 | orchestrator | Saturday 08 November 2025 13:46:48 +0000 (0:00:01.253) 0:03:44.359 ***** 2025-11-08 13:54:04.234741 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-08 13:54:04.234745 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-08 13:54:04.234748 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-08 13:54:04.234752 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.234756 | orchestrator | 2025-11-08 13:54:04.234760 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-11-08 13:54:04.234763 | orchestrator | Saturday 08 November 2025 13:46:48 +0000 (0:00:00.607) 0:03:44.966 ***** 2025-11-08 13:54:04.234767 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.234771 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.234775 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.234778 | orchestrator | 2025-11-08 13:54:04.234782 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2025-11-08 13:54:04.234786 | orchestrator | Saturday 08 November 2025 13:46:49 +0000 (0:00:00.401) 0:03:45.367 ***** 2025-11-08 13:54:04.234790 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.234793 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.234797 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.234801 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.234805 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.234820 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.234824 | orchestrator | 2025-11-08 13:54:04.234828 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 
2025-11-08 13:54:04.234832 | orchestrator | Saturday 08 November 2025 13:46:49 +0000 (0:00:00.924) 0:03:46.292 ***** 2025-11-08 13:54:04.234835 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.234839 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.234843 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.234847 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:54:04.234850 | orchestrator | 2025-11-08 13:54:04.234854 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-11-08 13:54:04.234858 | orchestrator | Saturday 08 November 2025 13:46:50 +0000 (0:00:00.911) 0:03:47.203 ***** 2025-11-08 13:54:04.234862 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.234865 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.234869 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.234873 | orchestrator | 2025-11-08 13:54:04.234876 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-11-08 13:54:04.234880 | orchestrator | Saturday 08 November 2025 13:46:51 +0000 (0:00:00.633) 0:03:47.837 ***** 2025-11-08 13:54:04.234884 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:54:04.234888 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:54:04.234891 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:54:04.234895 | orchestrator | 2025-11-08 13:54:04.234899 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-11-08 13:54:04.234903 | orchestrator | Saturday 08 November 2025 13:46:52 +0000 (0:00:01.281) 0:03:49.118 ***** 2025-11-08 13:54:04.234906 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-11-08 13:54:04.234910 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-11-08 13:54:04.234914 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-11-08 13:54:04.234918 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.234921 | orchestrator | 2025-11-08 13:54:04.234925 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-11-08 13:54:04.234929 | orchestrator | Saturday 08 November 2025 13:46:53 +0000 (0:00:00.657) 0:03:49.776 ***** 2025-11-08 13:54:04.234936 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.234943 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.234946 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.234950 | orchestrator | 2025-11-08 13:54:04.234954 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-11-08 13:54:04.234958 | orchestrator | 2025-11-08 13:54:04.234961 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-11-08 13:54:04.234965 | orchestrator | Saturday 08 November 2025 13:46:53 +0000 (0:00:00.539) 0:03:50.316 ***** 2025-11-08 13:54:04.234969 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:54:04.234973 | orchestrator | 2025-11-08 13:54:04.234977 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-11-08 13:54:04.234980 | orchestrator | Saturday 08 November 2025 13:46:54 +0000 (0:00:00.789) 0:03:51.105 ***** 2025-11-08 13:54:04.234984 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:54:04.234988 | orchestrator | 2025-11-08 13:54:04.234992 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-11-08 13:54:04.234996 | orchestrator | Saturday 08 November 2025 13:46:55 +0000 (0:00:00.581) 0:03:51.687 ***** 2025-11-08 13:54:04.234999 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.235003 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.235007 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.235011 | orchestrator | 2025-11-08 13:54:04.235014 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-11-08 13:54:04.235018 | orchestrator | Saturday 08 November 2025 13:46:56 +0000 (0:00:01.352) 0:03:53.039 ***** 2025-11-08 13:54:04.235022 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.235025 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.235029 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.235033 | orchestrator | 2025-11-08 13:54:04.235037 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-11-08 13:54:04.235040 | orchestrator | Saturday 08 November 2025 13:46:57 +0000 (0:00:00.352) 0:03:53.392 ***** 2025-11-08 13:54:04.235044 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.235048 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.235052 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.235055 | orchestrator | 2025-11-08 13:54:04.235059 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-11-08 13:54:04.235063 | orchestrator | Saturday 08 November 2025 13:46:57 +0000 (0:00:00.351) 0:03:53.744 ***** 2025-11-08 13:54:04.235066 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.235070 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.235074 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.235078 | orchestrator | 2025-11-08 13:54:04.235081 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-11-08 13:54:04.235085 | orchestrator | Saturday 08 November 2025 13:46:57 +0000 (0:00:00.310) 0:03:54.054 ***** 2025-11-08 13:54:04.235089 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.235093 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.235096 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.235100 | orchestrator | 2025-11-08 13:54:04.235104 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-11-08 13:54:04.235107 | orchestrator | Saturday 08 November 2025 13:46:58 +0000 (0:00:01.169) 0:03:55.223 ***** 2025-11-08 13:54:04.235111 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.235115 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.235119 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.235122 | orchestrator | 2025-11-08 13:54:04.235126 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-11-08 13:54:04.235130 | orchestrator | Saturday 08 November 2025 13:46:59 +0000 (0:00:00.350) 0:03:55.573 ***** 2025-11-08 13:54:04.235148 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.235153 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.235156 | orchestrator | skipping: 
[testbed-node-2] 2025-11-08 13:54:04.235160 | orchestrator | 2025-11-08 13:54:04.235164 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-11-08 13:54:04.235168 | orchestrator | Saturday 08 November 2025 13:46:59 +0000 (0:00:00.344) 0:03:55.918 ***** 2025-11-08 13:54:04.235171 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.235175 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.235179 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.235183 | orchestrator | 2025-11-08 13:54:04.235187 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-11-08 13:54:04.235190 | orchestrator | Saturday 08 November 2025 13:47:00 +0000 (0:00:00.813) 0:03:56.732 ***** 2025-11-08 13:54:04.235194 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.235198 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.235201 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.235205 | orchestrator | 2025-11-08 13:54:04.235209 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-11-08 13:54:04.235212 | orchestrator | Saturday 08 November 2025 13:47:01 +0000 (0:00:01.126) 0:03:57.858 ***** 2025-11-08 13:54:04.235216 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.235220 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.235224 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.235227 | orchestrator | 2025-11-08 13:54:04.235231 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-11-08 13:54:04.235235 | orchestrator | Saturday 08 November 2025 13:47:01 +0000 (0:00:00.323) 0:03:58.181 ***** 2025-11-08 13:54:04.235239 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.235242 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.235246 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.235250 | orchestrator | 2025-11-08 13:54:04.235254 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-11-08 13:54:04.235257 | orchestrator | Saturday 08 November 2025 13:47:02 +0000 (0:00:00.395) 0:03:58.577 ***** 2025-11-08 13:54:04.235261 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.235265 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.235269 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.235272 | orchestrator | 2025-11-08 13:54:04.235276 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-11-08 13:54:04.235282 | orchestrator | Saturday 08 November 2025 13:47:02 +0000 (0:00:00.338) 0:03:58.916 ***** 2025-11-08 13:54:04.235286 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.235290 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.235293 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.235297 | orchestrator | 2025-11-08 13:54:04.235303 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-11-08 13:54:04.235310 | orchestrator | Saturday 08 November 2025 13:47:02 +0000 (0:00:00.330) 0:03:59.247 ***** 2025-11-08 13:54:04.235316 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.235323 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.235329 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.235335 | orchestrator | 2025-11-08 13:54:04.235355 | orchestrator | TASK 
[ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-11-08 13:54:04.235361 | orchestrator | Saturday 08 November 2025 13:47:03 +0000 (0:00:00.631) 0:03:59.878 ***** 2025-11-08 13:54:04.235366 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.235372 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.235378 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.235383 | orchestrator | 2025-11-08 13:54:04.235388 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-11-08 13:54:04.235394 | orchestrator | Saturday 08 November 2025 13:47:03 +0000 (0:00:00.304) 0:04:00.183 ***** 2025-11-08 13:54:04.235401 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.235412 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.235418 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.235425 | orchestrator | 2025-11-08 13:54:04.235430 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-11-08 13:54:04.235436 | orchestrator | Saturday 08 November 2025 13:47:04 +0000 (0:00:00.340) 0:04:00.524 ***** 2025-11-08 13:54:04.235443 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.235447 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.235451 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.235455 | orchestrator | 2025-11-08 13:54:04.235458 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-11-08 13:54:04.235462 | orchestrator | Saturday 08 November 2025 13:47:04 +0000 (0:00:00.387) 0:04:00.912 ***** 2025-11-08 13:54:04.235466 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.235470 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.235473 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.235477 | orchestrator | 2025-11-08 13:54:04.235481 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-11-08 13:54:04.235484 | orchestrator | Saturday 08 November 2025 13:47:05 +0000 (0:00:00.652) 0:04:01.564 ***** 2025-11-08 13:54:04.235488 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.235492 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.235495 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.235499 | orchestrator | 2025-11-08 13:54:04.235503 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-11-08 13:54:04.235506 | orchestrator | Saturday 08 November 2025 13:47:05 +0000 (0:00:00.699) 0:04:02.263 ***** 2025-11-08 13:54:04.235510 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.235514 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.235517 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.235521 | orchestrator | 2025-11-08 13:54:04.235525 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2025-11-08 13:54:04.235529 | orchestrator | Saturday 08 November 2025 13:47:06 +0000 (0:00:00.482) 0:04:02.745 ***** 2025-11-08 13:54:04.235532 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:54:04.235537 | orchestrator | 2025-11-08 13:54:04.235541 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2025-11-08 13:54:04.235544 | orchestrator | Saturday 08 November 2025 13:47:07 +0000 (0:00:01.102) 0:04:03.848 
***** 2025-11-08 13:54:04.235548 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.235552 | orchestrator | 2025-11-08 13:54:04.235572 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2025-11-08 13:54:04.235576 | orchestrator | Saturday 08 November 2025 13:47:07 +0000 (0:00:00.183) 0:04:04.031 ***** 2025-11-08 13:54:04.235580 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-11-08 13:54:04.235584 | orchestrator | 2025-11-08 13:54:04.235587 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2025-11-08 13:54:04.235591 | orchestrator | Saturday 08 November 2025 13:47:08 +0000 (0:00:01.226) 0:04:05.257 ***** 2025-11-08 13:54:04.235595 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.235599 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.235602 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.235606 | orchestrator | 2025-11-08 13:54:04.235610 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-11-08 13:54:04.235613 | orchestrator | Saturday 08 November 2025 13:47:09 +0000 (0:00:00.640) 0:04:05.898 ***** 2025-11-08 13:54:04.235617 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.235621 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.235625 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.235628 | orchestrator | 2025-11-08 13:54:04.235632 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-11-08 13:54:04.235636 | orchestrator | Saturday 08 November 2025 13:47:10 +0000 (0:00:00.531) 0:04:06.429 ***** 2025-11-08 13:54:04.235639 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:54:04.235647 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:54:04.235651 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:54:04.235655 | orchestrator | 2025-11-08 13:54:04.235658 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-11-08 13:54:04.235662 | orchestrator | Saturday 08 November 2025 13:47:11 +0000 (0:00:01.813) 0:04:08.243 ***** 2025-11-08 13:54:04.235666 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:54:04.235670 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:54:04.235673 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:54:04.235677 | orchestrator | 2025-11-08 13:54:04.235681 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-11-08 13:54:04.235684 | orchestrator | Saturday 08 November 2025 13:47:12 +0000 (0:00:01.019) 0:04:09.262 ***** 2025-11-08 13:54:04.235688 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:54:04.235692 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:54:04.235696 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:54:04.235699 | orchestrator | 2025-11-08 13:54:04.235707 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-11-08 13:54:04.235711 | orchestrator | Saturday 08 November 2025 13:47:14 +0000 (0:00:01.190) 0:04:10.453 ***** 2025-11-08 13:54:04.235715 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.235718 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.235722 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.235726 | orchestrator | 2025-11-08 13:54:04.235730 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 
2025-11-08 13:54:04.235733 | orchestrator | Saturday 08 November 2025 13:47:14 +0000 (0:00:00.782) 0:04:11.236 ***** 2025-11-08 13:54:04.235737 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:54:04.235741 | orchestrator | 2025-11-08 13:54:04.235745 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2025-11-08 13:54:04.235748 | orchestrator | Saturday 08 November 2025 13:47:16 +0000 (0:00:01.689) 0:04:12.925 ***** 2025-11-08 13:54:04.235752 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.235756 | orchestrator | 2025-11-08 13:54:04.235759 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2025-11-08 13:54:04.235763 | orchestrator | Saturday 08 November 2025 13:47:17 +0000 (0:00:00.689) 0:04:13.615 ***** 2025-11-08 13:54:04.235767 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-11-08 13:54:04.235771 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-08 13:54:04.235774 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-08 13:54:04.235778 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-11-08 13:54:04.235782 | orchestrator | ok: [testbed-node-1] => (item=None) 2025-11-08 13:54:04.235786 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-11-08 13:54:04.235789 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-11-08 13:54:04.235793 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2025-11-08 13:54:04.235797 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-11-08 13:54:04.235801 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-11-08 13:54:04.235807 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-11-08 13:54:04.235813 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2025-11-08 13:54:04.235819 | orchestrator | 2025-11-08 13:54:04.235825 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-11-08 13:54:04.235831 | orchestrator | Saturday 08 November 2025 13:47:21 +0000 (0:00:03.882) 0:04:17.497 ***** 2025-11-08 13:54:04.235837 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:54:04.235843 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:54:04.235849 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:54:04.235856 | orchestrator | 2025-11-08 13:54:04.235859 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-11-08 13:54:04.235867 | orchestrator | Saturday 08 November 2025 13:47:22 +0000 (0:00:01.615) 0:04:19.113 ***** 2025-11-08 13:54:04.235870 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.235874 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.235878 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.235882 | orchestrator | 2025-11-08 13:54:04.235885 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-11-08 13:54:04.235889 | orchestrator | Saturday 08 November 2025 13:47:23 +0000 (0:00:00.338) 0:04:19.451 ***** 2025-11-08 13:54:04.235893 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.235897 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.235900 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.235904 | orchestrator | 2025-11-08 13:54:04.235908 | orchestrator | TASK 
[ceph-mon : Generate initial monmap] ************************************** 2025-11-08 13:54:04.235911 | orchestrator | Saturday 08 November 2025 13:47:23 +0000 (0:00:00.499) 0:04:19.950 ***** 2025-11-08 13:54:04.235915 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:54:04.235934 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:54:04.235938 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:54:04.235942 | orchestrator | 2025-11-08 13:54:04.235946 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-11-08 13:54:04.235950 | orchestrator | Saturday 08 November 2025 13:47:25 +0000 (0:00:01.854) 0:04:21.804 ***** 2025-11-08 13:54:04.235953 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:54:04.235957 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:54:04.235961 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:54:04.235965 | orchestrator | 2025-11-08 13:54:04.235969 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-11-08 13:54:04.235972 | orchestrator | Saturday 08 November 2025 13:47:26 +0000 (0:00:01.487) 0:04:23.292 ***** 2025-11-08 13:54:04.235976 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.235980 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.235983 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.235987 | orchestrator | 2025-11-08 13:54:04.235991 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-11-08 13:54:04.235995 | orchestrator | Saturday 08 November 2025 13:47:27 +0000 (0:00:00.260) 0:04:23.553 ***** 2025-11-08 13:54:04.235998 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:54:04.236002 | orchestrator | 2025-11-08 13:54:04.236006 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-11-08 13:54:04.236010 | orchestrator | Saturday 08 November 2025 13:47:27 +0000 (0:00:00.660) 0:04:24.214 ***** 2025-11-08 13:54:04.236014 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.236018 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.236021 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.236025 | orchestrator | 2025-11-08 13:54:04.236029 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-11-08 13:54:04.236033 | orchestrator | Saturday 08 November 2025 13:47:28 +0000 (0:00:00.290) 0:04:24.504 ***** 2025-11-08 13:54:04.236037 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.236040 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.236044 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.236048 | orchestrator | 2025-11-08 13:54:04.236052 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-11-08 13:54:04.236059 | orchestrator | Saturday 08 November 2025 13:47:28 +0000 (0:00:00.254) 0:04:24.759 ***** 2025-11-08 13:54:04.236063 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:54:04.236067 | orchestrator | 2025-11-08 13:54:04.236070 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-11-08 13:54:04.236074 | orchestrator | Saturday 08 November 2025 13:47:29 +0000 (0:00:00.643) 0:04:25.402 ***** 
2025-11-08 13:54:04.236078 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:54:04.236082 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:54:04.236090 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:54:04.236094 | orchestrator | 2025-11-08 13:54:04.236097 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-11-08 13:54:04.236101 | orchestrator | Saturday 08 November 2025 13:47:30 +0000 (0:00:01.554) 0:04:26.957 ***** 2025-11-08 13:54:04.236105 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:54:04.236109 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:54:04.236112 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:54:04.236116 | orchestrator | 2025-11-08 13:54:04.236120 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-11-08 13:54:04.236124 | orchestrator | Saturday 08 November 2025 13:47:31 +0000 (0:00:01.273) 0:04:28.230 ***** 2025-11-08 13:54:04.236127 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:54:04.236131 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:54:04.236135 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:54:04.236139 | orchestrator | 2025-11-08 13:54:04.236142 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-11-08 13:54:04.236146 | orchestrator | Saturday 08 November 2025 13:47:33 +0000 (0:00:01.777) 0:04:30.008 ***** 2025-11-08 13:54:04.236150 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:54:04.236154 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:54:04.236157 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:54:04.236161 | orchestrator | 2025-11-08 13:54:04.236165 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-11-08 13:54:04.236168 | orchestrator | Saturday 08 November 2025 13:47:36 +0000 (0:00:02.367) 0:04:32.375 ***** 2025-11-08 13:54:04.236172 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:54:04.236176 | orchestrator | 2025-11-08 13:54:04.236180 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2025-11-08 13:54:04.236184 | orchestrator | Saturday 08 November 2025 13:47:36 +0000 (0:00:00.596) 0:04:32.972 ***** 2025-11-08 13:54:04.236187 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.236191 | orchestrator | 2025-11-08 13:54:04.236195 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-11-08 13:54:04.236199 | orchestrator | Saturday 08 November 2025 13:47:37 +0000 (0:00:01.267) 0:04:34.240 ***** 2025-11-08 13:54:04.236202 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.236206 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.236210 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.236213 | orchestrator | 2025-11-08 13:54:04.236217 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-11-08 13:54:04.236221 | orchestrator | Saturday 08 November 2025 13:47:47 +0000 (0:00:09.967) 0:04:44.208 ***** 2025-11-08 13:54:04.236225 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.236229 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.236232 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.236236 | orchestrator | 2025-11-08 13:54:04.236240 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-11-08 13:54:04.236244 | orchestrator | Saturday 08 November 2025 13:47:48 +0000 (0:00:00.586) 0:04:44.794 ***** 2025-11-08 13:54:04.236261 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__bf3f1276fc4b45136fd7cd4dd483971821b37e86'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-11-08 13:54:04.236268 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__bf3f1276fc4b45136fd7cd4dd483971821b37e86'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-11-08 13:54:04.236278 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__bf3f1276fc4b45136fd7cd4dd483971821b37e86'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-11-08 13:54:04.236283 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__bf3f1276fc4b45136fd7cd4dd483971821b37e86'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-11-08 13:54:04.236290 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__bf3f1276fc4b45136fd7cd4dd483971821b37e86'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-11-08 
13:54:04.236295 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__bf3f1276fc4b45136fd7cd4dd483971821b37e86'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__bf3f1276fc4b45136fd7cd4dd483971821b37e86'}])  2025-11-08 13:54:04.236301 | orchestrator | 2025-11-08 13:54:04.236305 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-11-08 13:54:04.236308 | orchestrator | Saturday 08 November 2025 13:48:02 +0000 (0:00:14.317) 0:04:59.112 ***** 2025-11-08 13:54:04.236312 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.236316 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.236320 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.236323 | orchestrator | 2025-11-08 13:54:04.236327 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-11-08 13:54:04.236331 | orchestrator | Saturday 08 November 2025 13:48:03 +0000 (0:00:00.329) 0:04:59.441 ***** 2025-11-08 13:54:04.236335 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:54:04.236368 | orchestrator | 2025-11-08 13:54:04.236373 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-11-08 13:54:04.236376 | orchestrator | Saturday 08 November 2025 13:48:03 +0000 (0:00:00.856) 0:05:00.297 ***** 2025-11-08 13:54:04.236380 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.236384 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.236388 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.236391 | orchestrator | 2025-11-08 13:54:04.236395 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-11-08 13:54:04.236399 | orchestrator | Saturday 08 November 2025 13:48:04 +0000 (0:00:00.369) 0:05:00.667 ***** 2025-11-08 13:54:04.236402 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.236406 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.236410 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.236414 | orchestrator | 2025-11-08 13:54:04.236417 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-11-08 13:54:04.236421 | orchestrator | Saturday 08 November 2025 13:48:04 +0000 (0:00:00.316) 0:05:00.983 ***** 2025-11-08 13:54:04.236425 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-11-08 13:54:04.236429 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-11-08 13:54:04.236432 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-11-08 13:54:04.236436 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.236440 | orchestrator | 2025-11-08 13:54:04.236449 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-11-08 13:54:04.236452 | orchestrator | Saturday 08 November 2025 13:48:05 +0000 (0:00:01.116) 0:05:02.100 ***** 2025-11-08 13:54:04.236456 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.236460 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.236464 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.236467 | 
orchestrator | 2025-11-08 13:54:04.236471 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-11-08 13:54:04.236475 | orchestrator | 2025-11-08 13:54:04.236492 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-11-08 13:54:04.236497 | orchestrator | Saturday 08 November 2025 13:48:06 +0000 (0:00:00.555) 0:05:02.655 ***** 2025-11-08 13:54:04.236501 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:54:04.236504 | orchestrator | 2025-11-08 13:54:04.236508 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-11-08 13:54:04.236512 | orchestrator | Saturday 08 November 2025 13:48:06 +0000 (0:00:00.496) 0:05:03.152 ***** 2025-11-08 13:54:04.236516 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:54:04.236520 | orchestrator | 2025-11-08 13:54:04.236523 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-11-08 13:54:04.236527 | orchestrator | Saturday 08 November 2025 13:48:07 +0000 (0:00:00.679) 0:05:03.831 ***** 2025-11-08 13:54:04.236531 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.236535 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.236538 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.236542 | orchestrator | 2025-11-08 13:54:04.236546 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-11-08 13:54:04.236549 | orchestrator | Saturday 08 November 2025 13:48:08 +0000 (0:00:00.689) 0:05:04.521 ***** 2025-11-08 13:54:04.236553 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.236557 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.236561 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.236564 | orchestrator | 2025-11-08 13:54:04.236568 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-11-08 13:54:04.236572 | orchestrator | Saturday 08 November 2025 13:48:08 +0000 (0:00:00.275) 0:05:04.796 ***** 2025-11-08 13:54:04.236576 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.236579 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.236583 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.236587 | orchestrator | 2025-11-08 13:54:04.236590 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-11-08 13:54:04.236599 | orchestrator | Saturday 08 November 2025 13:48:08 +0000 (0:00:00.439) 0:05:05.236 ***** 2025-11-08 13:54:04.236603 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.236606 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.236610 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.236614 | orchestrator | 2025-11-08 13:54:04.236617 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-11-08 13:54:04.236621 | orchestrator | Saturday 08 November 2025 13:48:09 +0000 (0:00:00.285) 0:05:05.521 ***** 2025-11-08 13:54:04.236625 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.236629 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.236632 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.236636 | orchestrator | 
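The ceph-handler tasks surrounding this point ("Check for a mon container", "Check for a mgr container", and so on) only probe whether each Ceph daemon is already running as a container on the node before any restart handlers are evaluated. A rough manual equivalent is sketched below; it assumes Docker as the container runtime (substitute podman on podman-based hosts) and the ceph-<daemon>-<hostname> container naming convention, neither of which is confirmed by this excerpt, and it is not ceph-ansible's actual implementation.

    # Illustrative probe only -- not the ceph-ansible task itself.
    # Assumes Docker runtime and ceph-<daemon>-<short hostname> container names.
    for daemon in mon mgr crash exporter; do
      if docker ps -q --filter "name=ceph-${daemon}-$(hostname -s)" | grep -q .; then
        echo "ceph-${daemon} container is running on $(hostname -s)"
      else
        echo "no ceph-${daemon} container on $(hostname -s)"
      fi
    done

In the run above, the equivalent checks report mon, mgr, crash, and exporter containers present on testbed-node-0 through testbed-node-2, which is why the corresponding handler_*_status facts are set while the osd/mds/rgw/nfs/rbd-mirror checks are skipped.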
2025-11-08 13:54:04.236640 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-11-08 13:54:04.236644 | orchestrator | Saturday 08 November 2025 13:48:09 +0000 (0:00:00.699) 0:05:06.220 ***** 2025-11-08 13:54:04.236647 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.236651 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.236655 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.236659 | orchestrator | 2025-11-08 13:54:04.236662 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-11-08 13:54:04.236670 | orchestrator | Saturday 08 November 2025 13:48:10 +0000 (0:00:00.607) 0:05:06.828 ***** 2025-11-08 13:54:04.236674 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.236677 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.236681 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.236685 | orchestrator | 2025-11-08 13:54:04.236689 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-11-08 13:54:04.236692 | orchestrator | Saturday 08 November 2025 13:48:10 +0000 (0:00:00.422) 0:05:07.250 ***** 2025-11-08 13:54:04.236696 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.236700 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.236703 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.236707 | orchestrator | 2025-11-08 13:54:04.236711 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-11-08 13:54:04.236715 | orchestrator | Saturday 08 November 2025 13:48:11 +0000 (0:00:00.732) 0:05:07.983 ***** 2025-11-08 13:54:04.236718 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.236722 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.236726 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.236729 | orchestrator | 2025-11-08 13:54:04.236733 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-11-08 13:54:04.236737 | orchestrator | Saturday 08 November 2025 13:48:12 +0000 (0:00:00.714) 0:05:08.698 ***** 2025-11-08 13:54:04.236741 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.236744 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.236748 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.236752 | orchestrator | 2025-11-08 13:54:04.236756 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-11-08 13:54:04.236759 | orchestrator | Saturday 08 November 2025 13:48:12 +0000 (0:00:00.321) 0:05:09.019 ***** 2025-11-08 13:54:04.236763 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.236767 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.236771 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.236774 | orchestrator | 2025-11-08 13:54:04.236778 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-11-08 13:54:04.236782 | orchestrator | Saturday 08 November 2025 13:48:13 +0000 (0:00:00.472) 0:05:09.492 ***** 2025-11-08 13:54:04.236785 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.236789 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.236793 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.236797 | orchestrator | 2025-11-08 13:54:04.236800 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] 
****************************** 2025-11-08 13:54:04.236804 | orchestrator | Saturday 08 November 2025 13:48:13 +0000 (0:00:00.258) 0:05:09.751 ***** 2025-11-08 13:54:04.236808 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.236811 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.236828 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.236833 | orchestrator | 2025-11-08 13:54:04.236837 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-11-08 13:54:04.236840 | orchestrator | Saturday 08 November 2025 13:48:13 +0000 (0:00:00.318) 0:05:10.069 ***** 2025-11-08 13:54:04.236844 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.236848 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.236851 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.236855 | orchestrator | 2025-11-08 13:54:04.236859 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-11-08 13:54:04.236863 | orchestrator | Saturday 08 November 2025 13:48:13 +0000 (0:00:00.270) 0:05:10.340 ***** 2025-11-08 13:54:04.236866 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.236870 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.236874 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.236877 | orchestrator | 2025-11-08 13:54:04.236881 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-11-08 13:54:04.236885 | orchestrator | Saturday 08 November 2025 13:48:14 +0000 (0:00:00.261) 0:05:10.601 ***** 2025-11-08 13:54:04.236892 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.236895 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.236899 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.236903 | orchestrator | 2025-11-08 13:54:04.236907 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-11-08 13:54:04.236910 | orchestrator | Saturday 08 November 2025 13:48:14 +0000 (0:00:00.437) 0:05:11.038 ***** 2025-11-08 13:54:04.236914 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.236918 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.236922 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.236925 | orchestrator | 2025-11-08 13:54:04.236929 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-11-08 13:54:04.236933 | orchestrator | Saturday 08 November 2025 13:48:14 +0000 (0:00:00.285) 0:05:11.324 ***** 2025-11-08 13:54:04.236937 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.236940 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.236944 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.236948 | orchestrator | 2025-11-08 13:54:04.236951 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-11-08 13:54:04.236955 | orchestrator | Saturday 08 November 2025 13:48:15 +0000 (0:00:00.296) 0:05:11.620 ***** 2025-11-08 13:54:04.236959 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.236965 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.236969 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.236973 | orchestrator | 2025-11-08 13:54:04.236976 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2025-11-08 13:54:04.236980 | orchestrator | Saturday 08 November 2025 13:48:15 +0000 
(0:00:00.649) 0:05:12.269 ***** 2025-11-08 13:54:04.236984 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-11-08 13:54:04.236988 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-11-08 13:54:04.236992 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-11-08 13:54:04.236995 | orchestrator | 2025-11-08 13:54:04.236999 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-11-08 13:54:04.237003 | orchestrator | Saturday 08 November 2025 13:48:16 +0000 (0:00:00.550) 0:05:12.820 ***** 2025-11-08 13:54:04.237007 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:54:04.237011 | orchestrator | 2025-11-08 13:54:04.237014 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-11-08 13:54:04.237018 | orchestrator | Saturday 08 November 2025 13:48:16 +0000 (0:00:00.445) 0:05:13.266 ***** 2025-11-08 13:54:04.237022 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:54:04.237026 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:54:04.237029 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:54:04.237033 | orchestrator | 2025-11-08 13:54:04.237037 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2025-11-08 13:54:04.237040 | orchestrator | Saturday 08 November 2025 13:48:17 +0000 (0:00:00.681) 0:05:13.948 ***** 2025-11-08 13:54:04.237044 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.237048 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.237052 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.237055 | orchestrator | 2025-11-08 13:54:04.237059 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-11-08 13:54:04.237063 | orchestrator | Saturday 08 November 2025 13:48:18 +0000 (0:00:00.450) 0:05:14.398 ***** 2025-11-08 13:54:04.237066 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-11-08 13:54:04.237070 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-11-08 13:54:04.237074 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-11-08 13:54:04.237078 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-11-08 13:54:04.237082 | orchestrator | 2025-11-08 13:54:04.237085 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2025-11-08 13:54:04.237093 | orchestrator | Saturday 08 November 2025 13:48:28 +0000 (0:00:10.371) 0:05:24.770 ***** 2025-11-08 13:54:04.237096 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.237100 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.237104 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.237108 | orchestrator | 2025-11-08 13:54:04.237111 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2025-11-08 13:54:04.237115 | orchestrator | Saturday 08 November 2025 13:48:28 +0000 (0:00:00.336) 0:05:25.106 ***** 2025-11-08 13:54:04.237119 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-11-08 13:54:04.237122 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-11-08 13:54:04.237126 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-11-08 13:54:04.237130 | orchestrator | ok: [testbed-node-0] => (item=None) 
2025-11-08 13:54:04.237134 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-08 13:54:04.237137 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-08 13:54:04.237141 | orchestrator | 2025-11-08 13:54:04.237157 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2025-11-08 13:54:04.237161 | orchestrator | Saturday 08 November 2025 13:48:30 +0000 (0:00:02.132) 0:05:27.239 ***** 2025-11-08 13:54:04.237165 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-11-08 13:54:04.237169 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-11-08 13:54:04.237173 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-11-08 13:54:04.237176 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-11-08 13:54:04.237180 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-11-08 13:54:04.237184 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-11-08 13:54:04.237188 | orchestrator | 2025-11-08 13:54:04.237191 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2025-11-08 13:54:04.237195 | orchestrator | Saturday 08 November 2025 13:48:32 +0000 (0:00:01.291) 0:05:28.530 ***** 2025-11-08 13:54:04.237199 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.237203 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.237206 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.237210 | orchestrator | 2025-11-08 13:54:04.237214 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2025-11-08 13:54:04.237217 | orchestrator | Saturday 08 November 2025 13:48:33 +0000 (0:00:01.191) 0:05:29.721 ***** 2025-11-08 13:54:04.237221 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.237225 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.237229 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.237232 | orchestrator | 2025-11-08 13:54:04.237236 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2025-11-08 13:54:04.237240 | orchestrator | Saturday 08 November 2025 13:48:33 +0000 (0:00:00.311) 0:05:30.032 ***** 2025-11-08 13:54:04.237244 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.237247 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.237251 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.237255 | orchestrator | 2025-11-08 13:54:04.237258 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2025-11-08 13:54:04.237262 | orchestrator | Saturday 08 November 2025 13:48:33 +0000 (0:00:00.293) 0:05:30.326 ***** 2025-11-08 13:54:04.237266 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:54:04.237270 | orchestrator | 2025-11-08 13:54:04.237277 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2025-11-08 13:54:04.237280 | orchestrator | Saturday 08 November 2025 13:48:34 +0000 (0:00:00.681) 0:05:31.008 ***** 2025-11-08 13:54:04.237284 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.237288 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.237292 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.237299 | orchestrator | 2025-11-08 13:54:04.237302 | orchestrator | TASK [ceph-mgr : Add 
ceph-mgr systemd service overrides] *********************** 2025-11-08 13:54:04.237306 | orchestrator | Saturday 08 November 2025 13:48:34 +0000 (0:00:00.325) 0:05:31.333 ***** 2025-11-08 13:54:04.237310 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.237314 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.237317 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.237321 | orchestrator | 2025-11-08 13:54:04.237325 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2025-11-08 13:54:04.237328 | orchestrator | Saturday 08 November 2025 13:48:35 +0000 (0:00:00.335) 0:05:31.668 ***** 2025-11-08 13:54:04.237332 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:54:04.237336 | orchestrator | 2025-11-08 13:54:04.237353 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2025-11-08 13:54:04.237357 | orchestrator | Saturday 08 November 2025 13:48:36 +0000 (0:00:00.798) 0:05:32.467 ***** 2025-11-08 13:54:04.237361 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:54:04.237364 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:54:04.237368 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:54:04.237372 | orchestrator | 2025-11-08 13:54:04.237375 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-11-08 13:54:04.237379 | orchestrator | Saturday 08 November 2025 13:48:37 +0000 (0:00:01.269) 0:05:33.737 ***** 2025-11-08 13:54:04.237383 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:54:04.237387 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:54:04.237390 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:54:04.237394 | orchestrator | 2025-11-08 13:54:04.237398 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-11-08 13:54:04.237401 | orchestrator | Saturday 08 November 2025 13:48:38 +0000 (0:00:01.232) 0:05:34.969 ***** 2025-11-08 13:54:04.237405 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:54:04.237409 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:54:04.237413 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:54:04.237416 | orchestrator | 2025-11-08 13:54:04.237420 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2025-11-08 13:54:04.237424 | orchestrator | Saturday 08 November 2025 13:48:40 +0000 (0:00:01.880) 0:05:36.850 ***** 2025-11-08 13:54:04.237427 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:54:04.237431 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:54:04.237435 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:54:04.237439 | orchestrator | 2025-11-08 13:54:04.237442 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2025-11-08 13:54:04.237446 | orchestrator | Saturday 08 November 2025 13:48:42 +0000 (0:00:02.337) 0:05:39.188 ***** 2025-11-08 13:54:04.237450 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.237454 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.237457 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-11-08 13:54:04.237461 | orchestrator | 2025-11-08 13:54:04.237465 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-11-08 13:54:04.237469 | 
orchestrator | Saturday 08 November 2025 13:48:43 +0000 (0:00:00.430) 0:05:39.619 ***** 2025-11-08 13:54:04.237472 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-11-08 13:54:04.237489 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-11-08 13:54:04.237494 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-11-08 13:54:04.237497 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2025-11-08 13:54:04.237501 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 2025-11-08 13:54:04.237508 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-11-08 13:54:04.237512 | orchestrator | 2025-11-08 13:54:04.237516 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2025-11-08 13:54:04.237520 | orchestrator | Saturday 08 November 2025 13:49:13 +0000 (0:00:30.275) 0:06:09.895 ***** 2025-11-08 13:54:04.237524 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-11-08 13:54:04.237527 | orchestrator | 2025-11-08 13:54:04.237531 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-11-08 13:54:04.237535 | orchestrator | Saturday 08 November 2025 13:49:14 +0000 (0:00:01.300) 0:06:11.196 ***** 2025-11-08 13:54:04.237539 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.237542 | orchestrator | 2025-11-08 13:54:04.237546 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2025-11-08 13:54:04.237550 | orchestrator | Saturday 08 November 2025 13:49:15 +0000 (0:00:00.313) 0:06:11.510 ***** 2025-11-08 13:54:04.237554 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.237557 | orchestrator | 2025-11-08 13:54:04.237561 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2025-11-08 13:54:04.237565 | orchestrator | Saturday 08 November 2025 13:49:15 +0000 (0:00:00.160) 0:06:11.670 ***** 2025-11-08 13:54:04.237569 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-11-08 13:54:04.237572 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-11-08 13:54:04.237576 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-11-08 13:54:04.237580 | orchestrator | 2025-11-08 13:54:04.237586 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2025-11-08 13:54:04.237590 | orchestrator | Saturday 08 November 2025 13:49:21 +0000 (0:00:06.443) 0:06:18.114 ***** 2025-11-08 13:54:04.237594 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-11-08 13:54:04.237598 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-11-08 13:54:04.237602 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-11-08 13:54:04.237605 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-11-08 13:54:04.237609 | orchestrator | 2025-11-08 13:54:04.237613 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-11-08 
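The module management just logged (disable iostat/nfs/restful, enable dashboard and prometheus) boils down to ceph mgr module calls issued against one monitor. An illustrative sketch only; the delegation target and changed_when handling are assumptions and this is not the ceph-ansible mgr_modules.yml content.

    - name: Disable mgr modules that are not wanted     # matches the "Disable ceph mgr enabled modules" items above
      ansible.builtin.command: "ceph mgr module disable {{ item }}"
      loop: [iostat, nfs, restful]
      delegate_to: testbed-node-0                       # assumed first monitor, as in the log
      run_once: true
      changed_when: true

    - name: Add modules to ceph-mgr                     # matches the "Add modules to ceph-mgr" items above
      ansible.builtin.command: "ceph mgr module enable {{ item }}"
      loop: [dashboard, prometheus]
      delegate_to: testbed-node-0
      run_once: true
      changed_when: true
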
13:54:04.237617 | orchestrator | Saturday 08 November 2025 13:49:26 +0000 (0:00:05.136) 0:06:23.250 ***** 2025-11-08 13:54:04.237620 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:54:04.237624 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:54:04.237628 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:54:04.237632 | orchestrator | 2025-11-08 13:54:04.237636 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-11-08 13:54:04.237639 | orchestrator | Saturday 08 November 2025 13:49:27 +0000 (0:00:00.754) 0:06:24.005 ***** 2025-11-08 13:54:04.237643 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:54:04.237647 | orchestrator | 2025-11-08 13:54:04.237651 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-11-08 13:54:04.237654 | orchestrator | Saturday 08 November 2025 13:49:28 +0000 (0:00:00.802) 0:06:24.808 ***** 2025-11-08 13:54:04.237658 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.237662 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.237666 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.237669 | orchestrator | 2025-11-08 13:54:04.237673 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-11-08 13:54:04.237677 | orchestrator | Saturday 08 November 2025 13:49:28 +0000 (0:00:00.323) 0:06:25.131 ***** 2025-11-08 13:54:04.237681 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:54:04.237684 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:54:04.237688 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:54:04.237692 | orchestrator | 2025-11-08 13:54:04.237698 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-11-08 13:54:04.237702 | orchestrator | Saturday 08 November 2025 13:49:29 +0000 (0:00:01.180) 0:06:26.311 ***** 2025-11-08 13:54:04.237706 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-11-08 13:54:04.237710 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-11-08 13:54:04.237714 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-11-08 13:54:04.237717 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.237721 | orchestrator | 2025-11-08 13:54:04.237725 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-11-08 13:54:04.237729 | orchestrator | Saturday 08 November 2025 13:49:30 +0000 (0:00:00.625) 0:06:26.937 ***** 2025-11-08 13:54:04.237732 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.237736 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.237740 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.237744 | orchestrator | 2025-11-08 13:54:04.237747 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-11-08 13:54:04.237751 | orchestrator | 2025-11-08 13:54:04.237755 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-11-08 13:54:04.237759 | orchestrator | Saturday 08 November 2025 13:49:31 +0000 (0:00:00.814) 0:06:27.752 ***** 2025-11-08 13:54:04.237763 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 13:54:04.237767 | orchestrator | 2025-11-08 13:54:04.237783 
| orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-11-08 13:54:04.237788 | orchestrator | Saturday 08 November 2025 13:49:31 +0000 (0:00:00.555) 0:06:28.308 ***** 2025-11-08 13:54:04.237791 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 13:54:04.237795 | orchestrator | 2025-11-08 13:54:04.237799 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-11-08 13:54:04.237803 | orchestrator | Saturday 08 November 2025 13:49:32 +0000 (0:00:00.720) 0:06:29.029 ***** 2025-11-08 13:54:04.237806 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.237810 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.237814 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.237818 | orchestrator | 2025-11-08 13:54:04.237821 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-11-08 13:54:04.237825 | orchestrator | Saturday 08 November 2025 13:49:32 +0000 (0:00:00.307) 0:06:29.337 ***** 2025-11-08 13:54:04.237829 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.237833 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.237837 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.237840 | orchestrator | 2025-11-08 13:54:04.237844 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-11-08 13:54:04.237848 | orchestrator | Saturday 08 November 2025 13:49:33 +0000 (0:00:00.670) 0:06:30.007 ***** 2025-11-08 13:54:04.237852 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.237855 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.237859 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.237863 | orchestrator | 2025-11-08 13:54:04.237867 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-11-08 13:54:04.237870 | orchestrator | Saturday 08 November 2025 13:49:34 +0000 (0:00:00.666) 0:06:30.673 ***** 2025-11-08 13:54:04.237874 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.237878 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.237881 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.237885 | orchestrator | 2025-11-08 13:54:04.237889 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-11-08 13:54:04.237893 | orchestrator | Saturday 08 November 2025 13:49:35 +0000 (0:00:00.972) 0:06:31.646 ***** 2025-11-08 13:54:04.237896 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.237903 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.237911 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.237914 | orchestrator | 2025-11-08 13:54:04.237918 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-11-08 13:54:04.237922 | orchestrator | Saturday 08 November 2025 13:49:35 +0000 (0:00:00.308) 0:06:31.955 ***** 2025-11-08 13:54:04.237926 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.237929 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.237933 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.237937 | orchestrator | 2025-11-08 13:54:04.237940 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-11-08 13:54:04.237944 | orchestrator | Saturday 08 
November 2025 13:49:35 +0000 (0:00:00.308) 0:06:32.263 ***** 2025-11-08 13:54:04.237948 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.237952 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.237955 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.237959 | orchestrator | 2025-11-08 13:54:04.237963 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-11-08 13:54:04.237966 | orchestrator | Saturday 08 November 2025 13:49:36 +0000 (0:00:00.312) 0:06:32.576 ***** 2025-11-08 13:54:04.237970 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.237974 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.237978 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.237981 | orchestrator | 2025-11-08 13:54:04.237985 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-11-08 13:54:04.237989 | orchestrator | Saturday 08 November 2025 13:49:37 +0000 (0:00:00.954) 0:06:33.531 ***** 2025-11-08 13:54:04.237993 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.237996 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.238000 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.238004 | orchestrator | 2025-11-08 13:54:04.238008 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-11-08 13:54:04.238032 | orchestrator | Saturday 08 November 2025 13:49:37 +0000 (0:00:00.726) 0:06:34.257 ***** 2025-11-08 13:54:04.238037 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.238041 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.238045 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.238048 | orchestrator | 2025-11-08 13:54:04.238052 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-11-08 13:54:04.238056 | orchestrator | Saturday 08 November 2025 13:49:38 +0000 (0:00:00.299) 0:06:34.557 ***** 2025-11-08 13:54:04.238060 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.238063 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.238067 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.238071 | orchestrator | 2025-11-08 13:54:04.238074 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-11-08 13:54:04.238078 | orchestrator | Saturday 08 November 2025 13:49:38 +0000 (0:00:00.290) 0:06:34.847 ***** 2025-11-08 13:54:04.238082 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.238086 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.238091 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.238097 | orchestrator | 2025-11-08 13:54:04.238104 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-11-08 13:54:04.238109 | orchestrator | Saturday 08 November 2025 13:49:39 +0000 (0:00:00.600) 0:06:35.448 ***** 2025-11-08 13:54:04.238115 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.238121 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.238127 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.238133 | orchestrator | 2025-11-08 13:54:04.238139 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-11-08 13:54:04.238145 | orchestrator | Saturday 08 November 2025 13:49:39 +0000 (0:00:00.363) 0:06:35.811 ***** 2025-11-08 13:54:04.238151 | orchestrator | ok: [testbed-node-3] 
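The repeated "Check for a ... container" / "Set_fact handler_..._status" pairs above probe which Ceph daemons already run on each node so that later handlers only restart daemons that exist there. A hedged sketch of that pattern, assuming Docker as the container runtime (the testbed may use a different binary) and hypothetical register/fact names:

    - name: Check for an osd container                  # illustrative probe; never fails the play
      ansible.builtin.command: docker ps -q --filter name=ceph-osd
      register: osd_container_check                     # hypothetical register name
      changed_when: false
      failed_when: false

    - name: Set_fact handler_osd_status
      ansible.builtin.set_fact:
        handler_osd_status: "{{ osd_container_check.stdout | length > 0 }}"
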
2025-11-08 13:54:04.238157 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.238163 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.238169 | orchestrator | 2025-11-08 13:54:04.238175 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-11-08 13:54:04.238189 | orchestrator | Saturday 08 November 2025 13:49:39 +0000 (0:00:00.320) 0:06:36.132 ***** 2025-11-08 13:54:04.238193 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.238197 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.238201 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.238204 | orchestrator | 2025-11-08 13:54:04.238208 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-11-08 13:54:04.238212 | orchestrator | Saturday 08 November 2025 13:49:40 +0000 (0:00:00.363) 0:06:36.495 ***** 2025-11-08 13:54:04.238216 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.238219 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.238223 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.238227 | orchestrator | 2025-11-08 13:54:04.238231 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-11-08 13:54:04.238234 | orchestrator | Saturday 08 November 2025 13:49:40 +0000 (0:00:00.572) 0:06:37.067 ***** 2025-11-08 13:54:04.238238 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.238242 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.238245 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.238249 | orchestrator | 2025-11-08 13:54:04.238253 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-11-08 13:54:04.238257 | orchestrator | Saturday 08 November 2025 13:49:41 +0000 (0:00:00.335) 0:06:37.403 ***** 2025-11-08 13:54:04.238260 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.238264 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.238268 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.238271 | orchestrator | 2025-11-08 13:54:04.238275 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-11-08 13:54:04.238279 | orchestrator | Saturday 08 November 2025 13:49:41 +0000 (0:00:00.324) 0:06:37.728 ***** 2025-11-08 13:54:04.238283 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.238286 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.238290 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.238294 | orchestrator | 2025-11-08 13:54:04.238297 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2025-11-08 13:54:04.238301 | orchestrator | Saturday 08 November 2025 13:49:42 +0000 (0:00:00.778) 0:06:38.506 ***** 2025-11-08 13:54:04.238305 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.238309 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.238312 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.238316 | orchestrator | 2025-11-08 13:54:04.238323 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2025-11-08 13:54:04.238327 | orchestrator | Saturday 08 November 2025 13:49:42 +0000 (0:00:00.358) 0:06:38.864 ***** 2025-11-08 13:54:04.238331 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-11-08 13:54:04.238335 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-11-08 13:54:04.238347 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-11-08 13:54:04.238351 | orchestrator | 2025-11-08 13:54:04.238354 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2025-11-08 13:54:04.238358 | orchestrator | Saturday 08 November 2025 13:49:43 +0000 (0:00:00.624) 0:06:39.489 ***** 2025-11-08 13:54:04.238362 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 13:54:04.238365 | orchestrator | 2025-11-08 13:54:04.238369 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2025-11-08 13:54:04.238373 | orchestrator | Saturday 08 November 2025 13:49:43 +0000 (0:00:00.508) 0:06:39.997 ***** 2025-11-08 13:54:04.238377 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.238380 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.238384 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.238388 | orchestrator | 2025-11-08 13:54:04.238392 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-11-08 13:54:04.238399 | orchestrator | Saturday 08 November 2025 13:49:44 +0000 (0:00:00.591) 0:06:40.589 ***** 2025-11-08 13:54:04.238403 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.238407 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.238411 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.238414 | orchestrator | 2025-11-08 13:54:04.238418 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-11-08 13:54:04.238422 | orchestrator | Saturday 08 November 2025 13:49:44 +0000 (0:00:00.303) 0:06:40.892 ***** 2025-11-08 13:54:04.238425 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.238429 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.238433 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.238437 | orchestrator | 2025-11-08 13:54:04.238440 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-11-08 13:54:04.238444 | orchestrator | Saturday 08 November 2025 13:49:45 +0000 (0:00:00.641) 0:06:41.533 ***** 2025-11-08 13:54:04.238448 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.238452 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.238455 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.238459 | orchestrator | 2025-11-08 13:54:04.238463 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-11-08 13:54:04.238467 | orchestrator | Saturday 08 November 2025 13:49:45 +0000 (0:00:00.311) 0:06:41.844 ***** 2025-11-08 13:54:04.238470 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-11-08 13:54:04.238474 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-11-08 13:54:04.238478 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-11-08 13:54:04.238482 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-11-08 13:54:04.238486 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-11-08 
13:54:04.238489 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-11-08 13:54:04.238497 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-11-08 13:54:04.238501 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-11-08 13:54:04.238504 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-11-08 13:54:04.238508 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-11-08 13:54:04.238512 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-11-08 13:54:04.238516 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-11-08 13:54:04.238519 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-11-08 13:54:04.238523 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-11-08 13:54:04.238527 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-11-08 13:54:04.238531 | orchestrator | 2025-11-08 13:54:04.238534 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2025-11-08 13:54:04.238538 | orchestrator | Saturday 08 November 2025 13:49:47 +0000 (0:00:02.436) 0:06:44.280 ***** 2025-11-08 13:54:04.238542 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.238545 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.238549 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.238553 | orchestrator | 2025-11-08 13:54:04.238556 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-11-08 13:54:04.238560 | orchestrator | Saturday 08 November 2025 13:49:48 +0000 (0:00:00.306) 0:06:44.587 ***** 2025-11-08 13:54:04.238564 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 13:54:04.238573 | orchestrator | 2025-11-08 13:54:04.238576 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-11-08 13:54:04.238580 | orchestrator | Saturday 08 November 2025 13:49:48 +0000 (0:00:00.523) 0:06:45.111 ***** 2025-11-08 13:54:04.238587 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-11-08 13:54:04.238591 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-11-08 13:54:04.238595 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-11-08 13:54:04.238598 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-11-08 13:54:04.238602 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-11-08 13:54:04.238606 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-11-08 13:54:04.238610 | orchestrator | 2025-11-08 13:54:04.238613 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-11-08 13:54:04.238617 | orchestrator | Saturday 08 November 2025 13:49:50 +0000 (0:00:01.284) 0:06:46.395 ***** 2025-11-08 13:54:04.238621 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-08 13:54:04.238624 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-11-08 13:54:04.238628 
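The "Apply operating system tuning" task a few records back sets the kernel parameters shown in the log on every OSD node. A minimal equivalent is sketched below, assuming the ansible.posix collection is available (the role may use a different module); the keys and values are taken directly from the log output.

    - name: Apply operating system tuning               # same keys/values as logged above
      ansible.posix.sysctl:
        name: "{{ item.name }}"
        value: "{{ item.value }}"
        state: present
        sysctl_set: true                                # also apply the value at runtime
      loop:
        - { name: fs.aio-max-nr, value: "1048576" }
        - { name: fs.file-max, value: "26234859" }
        - { name: vm.zone_reclaim_mode, value: "0" }
        - { name: vm.swappiness, value: "10" }
        - { name: vm.min_free_kbytes, value: "67584" }
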
| orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-11-08 13:54:04.238632 | orchestrator | 2025-11-08 13:54:04.238635 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2025-11-08 13:54:04.238639 | orchestrator | Saturday 08 November 2025 13:49:52 +0000 (0:00:02.014) 0:06:48.410 ***** 2025-11-08 13:54:04.238643 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-11-08 13:54:04.238647 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-11-08 13:54:04.238650 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:54:04.238654 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-11-08 13:54:04.238658 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-11-08 13:54:04.238661 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:54:04.238665 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-11-08 13:54:04.238669 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-11-08 13:54:04.238672 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:54:04.238676 | orchestrator | 2025-11-08 13:54:04.238680 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-11-08 13:54:04.238684 | orchestrator | Saturday 08 November 2025 13:49:53 +0000 (0:00:01.167) 0:06:49.577 ***** 2025-11-08 13:54:04.238687 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-11-08 13:54:04.238691 | orchestrator | 2025-11-08 13:54:04.238695 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-11-08 13:54:04.238699 | orchestrator | Saturday 08 November 2025 13:49:55 +0000 (0:00:02.053) 0:06:51.631 ***** 2025-11-08 13:54:04.238702 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 13:54:04.238706 | orchestrator | 2025-11-08 13:54:04.238710 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-11-08 13:54:04.238713 | orchestrator | Saturday 08 November 2025 13:49:55 +0000 (0:00:00.568) 0:06:52.200 ***** 2025-11-08 13:54:04.238717 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f393addc-5b9a-54bf-a4a6-7d44f9449202', 'data_vg': 'ceph-f393addc-5b9a-54bf-a4a6-7d44f9449202'}) 2025-11-08 13:54:04.238722 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-cd56445f-4803-5564-bbe6-d923870c576d', 'data_vg': 'ceph-cd56445f-4803-5564-bbe6-d923870c576d'}) 2025-11-08 13:54:04.238726 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-56ba2a68-c761-5674-9bd2-a2481e6b0580', 'data_vg': 'ceph-56ba2a68-c761-5674-9bd2-a2481e6b0580'}) 2025-11-08 13:54:04.238732 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-380ddcdc-ed2e-5f5e-8a3f-001787d903df', 'data_vg': 'ceph-380ddcdc-ed2e-5f5e-8a3f-001787d903df'}) 2025-11-08 13:54:04.238739 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c507e483-80d4-5110-a9ba-f918053b344b', 'data_vg': 'ceph-c507e483-80d4-5110-a9ba-f918053b344b'}) 2025-11-08 13:54:04.238742 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b5af892c-b8e4-5298-acf4-1670635abe97', 'data_vg': 'ceph-b5af892c-b8e4-5298-acf4-1670635abe97'}) 2025-11-08 13:54:04.238746 | orchestrator | 2025-11-08 13:54:04.238750 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-11-08 
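Each item in the "Use ceph-volume to create osds" task above is a pre-provisioned LVM volume group / logical volume pair, and the role turns each one into a BlueStore OSD. An illustrative command-based equivalent follows; ceph-ansible itself drives this through its own ceph_volume module, so treat this as a sketch rather than the role's implementation.

    - name: Use ceph-volume to create osds              # one OSD per configured logical volume
      ansible.builtin.command: >
        ceph-volume lvm create --bluestore
        --data {{ item.data_vg }}/{{ item.data }}
      loop: "{{ lvm_volumes }}"                         # items like {'data': 'osd-block-...', 'data_vg': 'ceph-...'}
      changed_when: true
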
13:54:04.238754 | orchestrator | Saturday 08 November 2025 13:50:40 +0000 (0:00:44.912) 0:07:37.112 ***** 2025-11-08 13:54:04.238757 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.238761 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.238765 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.238769 | orchestrator | 2025-11-08 13:54:04.238772 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-11-08 13:54:04.238776 | orchestrator | Saturday 08 November 2025 13:50:41 +0000 (0:00:00.352) 0:07:37.465 ***** 2025-11-08 13:54:04.238780 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 13:54:04.238783 | orchestrator | 2025-11-08 13:54:04.238787 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-11-08 13:54:04.238791 | orchestrator | Saturday 08 November 2025 13:50:41 +0000 (0:00:00.509) 0:07:37.974 ***** 2025-11-08 13:54:04.238795 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.238798 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.238802 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.238806 | orchestrator | 2025-11-08 13:54:04.238809 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-11-08 13:54:04.238813 | orchestrator | Saturday 08 November 2025 13:50:42 +0000 (0:00:00.990) 0:07:38.964 ***** 2025-11-08 13:54:04.238817 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.238820 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.238824 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.238828 | orchestrator | 2025-11-08 13:54:04.238834 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-11-08 13:54:04.238838 | orchestrator | Saturday 08 November 2025 13:50:45 +0000 (0:00:02.384) 0:07:41.349 ***** 2025-11-08 13:54:04.238842 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 13:54:04.238846 | orchestrator | 2025-11-08 13:54:04.238849 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2025-11-08 13:54:04.238853 | orchestrator | Saturday 08 November 2025 13:50:45 +0000 (0:00:00.512) 0:07:41.862 ***** 2025-11-08 13:54:04.238857 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:54:04.238860 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:54:04.238864 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:54:04.238868 | orchestrator | 2025-11-08 13:54:04.238872 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-11-08 13:54:04.238875 | orchestrator | Saturday 08 November 2025 13:50:46 +0000 (0:00:01.377) 0:07:43.239 ***** 2025-11-08 13:54:04.238879 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:54:04.238883 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:54:04.238886 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:54:04.238890 | orchestrator | 2025-11-08 13:54:04.238894 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-11-08 13:54:04.238897 | orchestrator | Saturday 08 November 2025 13:50:47 +0000 (0:00:01.008) 0:07:44.248 ***** 2025-11-08 13:54:04.238901 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:54:04.238905 | orchestrator | changed: 
[testbed-node-3] 2025-11-08 13:54:04.238908 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:54:04.238912 | orchestrator | 2025-11-08 13:54:04.238916 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-11-08 13:54:04.238920 | orchestrator | Saturday 08 November 2025 13:50:49 +0000 (0:00:01.660) 0:07:45.909 ***** 2025-11-08 13:54:04.238927 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.238931 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.238934 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.238938 | orchestrator | 2025-11-08 13:54:04.238942 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2025-11-08 13:54:04.238946 | orchestrator | Saturday 08 November 2025 13:50:50 +0000 (0:00:00.439) 0:07:46.348 ***** 2025-11-08 13:54:04.238949 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.238953 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.238957 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.238960 | orchestrator | 2025-11-08 13:54:04.238964 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-11-08 13:54:04.238968 | orchestrator | Saturday 08 November 2025 13:50:50 +0000 (0:00:00.669) 0:07:47.017 ***** 2025-11-08 13:54:04.238972 | orchestrator | ok: [testbed-node-3] => (item=4) 2025-11-08 13:54:04.238975 | orchestrator | ok: [testbed-node-4] => (item=3) 2025-11-08 13:54:04.238979 | orchestrator | ok: [testbed-node-5] => (item=5) 2025-11-08 13:54:04.238983 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-11-08 13:54:04.238986 | orchestrator | ok: [testbed-node-4] => (item=2) 2025-11-08 13:54:04.238990 | orchestrator | ok: [testbed-node-5] => (item=1) 2025-11-08 13:54:04.238994 | orchestrator | 2025-11-08 13:54:04.238997 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-11-08 13:54:04.239001 | orchestrator | Saturday 08 November 2025 13:50:51 +0000 (0:00:00.944) 0:07:47.961 ***** 2025-11-08 13:54:04.239005 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-11-08 13:54:04.239009 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-11-08 13:54:04.239013 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-11-08 13:54:04.239016 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-11-08 13:54:04.239020 | orchestrator | changed: [testbed-node-5] => (item=1) 2025-11-08 13:54:04.239024 | orchestrator | changed: [testbed-node-4] => (item=2) 2025-11-08 13:54:04.239027 | orchestrator | 2025-11-08 13:54:04.239033 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2025-11-08 13:54:04.239037 | orchestrator | Saturday 08 November 2025 13:50:53 +0000 (0:00:01.908) 0:07:49.870 ***** 2025-11-08 13:54:04.239041 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-11-08 13:54:04.239045 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-11-08 13:54:04.239048 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-11-08 13:54:04.239052 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-11-08 13:54:04.239056 | orchestrator | changed: [testbed-node-5] => (item=1) 2025-11-08 13:54:04.239059 | orchestrator | changed: [testbed-node-4] => (item=2) 2025-11-08 13:54:04.239063 | orchestrator | 2025-11-08 13:54:04.239067 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-11-08 
13:54:04.239070 | orchestrator | Saturday 08 November 2025 13:50:57 +0000 (0:00:03.506) 0:07:53.376 ***** 2025-11-08 13:54:04.239074 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.239078 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.239081 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-11-08 13:54:04.239085 | orchestrator | 2025-11-08 13:54:04.239089 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-11-08 13:54:04.239093 | orchestrator | Saturday 08 November 2025 13:51:00 +0000 (0:00:03.439) 0:07:56.816 ***** 2025-11-08 13:54:04.239096 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.239100 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.239104 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2025-11-08 13:54:04.239107 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-11-08 13:54:04.239111 | orchestrator | 2025-11-08 13:54:04.239115 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-11-08 13:54:04.239119 | orchestrator | Saturday 08 November 2025 13:51:13 +0000 (0:00:12.537) 0:08:09.353 ***** 2025-11-08 13:54:04.239125 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.239129 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.239133 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.239136 | orchestrator | 2025-11-08 13:54:04.239140 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-11-08 13:54:04.239147 | orchestrator | Saturday 08 November 2025 13:51:14 +0000 (0:00:01.054) 0:08:10.408 ***** 2025-11-08 13:54:04.239151 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.239154 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.239158 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.239162 | orchestrator | 2025-11-08 13:54:04.239166 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-11-08 13:54:04.239169 | orchestrator | Saturday 08 November 2025 13:51:14 +0000 (0:00:00.344) 0:08:10.752 ***** 2025-11-08 13:54:04.239173 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 13:54:04.239177 | orchestrator | 2025-11-08 13:54:04.239180 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-11-08 13:54:04.239184 | orchestrator | Saturday 08 November 2025 13:51:14 +0000 (0:00:00.499) 0:08:11.252 ***** 2025-11-08 13:54:04.239188 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-08 13:54:04.239192 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-08 13:54:04.239195 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-08 13:54:04.239199 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.239203 | orchestrator | 2025-11-08 13:54:04.239207 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-11-08 13:54:04.239210 | orchestrator | Saturday 08 November 2025 13:51:15 +0000 (0:00:00.899) 0:08:12.151 ***** 2025-11-08 13:54:04.239214 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.239218 | orchestrator | skipping: [testbed-node-4] 2025-11-08 
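The "Set noup flag" / "Unset noup flag" pair brackets OSD activation so that new OSDs are not marked up until all of them have been started, and "Wait for all osd to be up" then polls the cluster (one retry was needed here, out of 60). A hedged sketch of that polling pattern; the delay and the JSON field names are assumptions about current Ceph output, not the role's exact condition.

    - name: Unset noup flag                             # run once against a monitor, as in the log
      ansible.builtin.command: ceph osd unset noup
      delegate_to: testbed-node-0
      run_once: true
      changed_when: true

    - name: Wait for all osd to be up
      ansible.builtin.command: ceph osd stat --format json
      register: osd_stat
      delegate_to: testbed-node-0
      run_once: true
      changed_when: false
      retries: 60                                       # matches "60 retries left" in the log
      delay: 10                                         # assumed polling interval
      until: >-
        (osd_stat.stdout | from_json).num_osds > 0 and
        (osd_stat.stdout | from_json).num_osds == (osd_stat.stdout | from_json).num_up_osds
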
13:54:04.239221 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.239225 | orchestrator | 2025-11-08 13:54:04.239229 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-11-08 13:54:04.239232 | orchestrator | Saturday 08 November 2025 13:51:16 +0000 (0:00:00.338) 0:08:12.489 ***** 2025-11-08 13:54:04.239236 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.239240 | orchestrator | 2025-11-08 13:54:04.239244 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-11-08 13:54:04.239247 | orchestrator | Saturday 08 November 2025 13:51:16 +0000 (0:00:00.216) 0:08:12.706 ***** 2025-11-08 13:54:04.239251 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.239255 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.239259 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.239262 | orchestrator | 2025-11-08 13:54:04.239266 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-11-08 13:54:04.239270 | orchestrator | Saturday 08 November 2025 13:51:16 +0000 (0:00:00.339) 0:08:13.045 ***** 2025-11-08 13:54:04.239273 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.239277 | orchestrator | 2025-11-08 13:54:04.239281 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-11-08 13:54:04.239284 | orchestrator | Saturday 08 November 2025 13:51:16 +0000 (0:00:00.212) 0:08:13.258 ***** 2025-11-08 13:54:04.239288 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.239292 | orchestrator | 2025-11-08 13:54:04.239296 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-11-08 13:54:04.239299 | orchestrator | Saturday 08 November 2025 13:51:17 +0000 (0:00:00.230) 0:08:13.489 ***** 2025-11-08 13:54:04.239303 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.239307 | orchestrator | 2025-11-08 13:54:04.239311 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-11-08 13:54:04.239314 | orchestrator | Saturday 08 November 2025 13:51:17 +0000 (0:00:00.125) 0:08:13.615 ***** 2025-11-08 13:54:04.239321 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.239325 | orchestrator | 2025-11-08 13:54:04.239328 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-11-08 13:54:04.239332 | orchestrator | Saturday 08 November 2025 13:51:17 +0000 (0:00:00.266) 0:08:13.881 ***** 2025-11-08 13:54:04.239360 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.239365 | orchestrator | 2025-11-08 13:54:04.239369 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-11-08 13:54:04.239373 | orchestrator | Saturday 08 November 2025 13:51:18 +0000 (0:00:00.766) 0:08:14.647 ***** 2025-11-08 13:54:04.239376 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-08 13:54:04.239380 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-08 13:54:04.239384 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-08 13:54:04.239388 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.239391 | orchestrator | 2025-11-08 13:54:04.239395 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-11-08 13:54:04.239399 | 
orchestrator | Saturday 08 November 2025 13:51:18 +0000 (0:00:00.435) 0:08:15.082 ***** 2025-11-08 13:54:04.239402 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.239406 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.239410 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.239413 | orchestrator | 2025-11-08 13:54:04.239417 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-11-08 13:54:04.239421 | orchestrator | Saturday 08 November 2025 13:51:19 +0000 (0:00:00.320) 0:08:15.403 ***** 2025-11-08 13:54:04.239425 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.239428 | orchestrator | 2025-11-08 13:54:04.239432 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-11-08 13:54:04.239436 | orchestrator | Saturday 08 November 2025 13:51:19 +0000 (0:00:00.236) 0:08:15.640 ***** 2025-11-08 13:54:04.239439 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.239443 | orchestrator | 2025-11-08 13:54:04.239447 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-11-08 13:54:04.239450 | orchestrator | 2025-11-08 13:54:04.239454 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-11-08 13:54:04.239458 | orchestrator | Saturday 08 November 2025 13:51:20 +0000 (0:00:00.905) 0:08:16.545 ***** 2025-11-08 13:54:04.239462 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:54:04.239466 | orchestrator | 2025-11-08 13:54:04.239472 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-11-08 13:54:04.239476 | orchestrator | Saturday 08 November 2025 13:51:21 +0000 (0:00:01.172) 0:08:17.717 ***** 2025-11-08 13:54:04.239480 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:54:04.239484 | orchestrator | 2025-11-08 13:54:04.239487 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-11-08 13:54:04.239491 | orchestrator | Saturday 08 November 2025 13:51:22 +0000 (0:00:00.982) 0:08:18.700 ***** 2025-11-08 13:54:04.239495 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.239498 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.239502 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.239506 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.239510 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.239513 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.239517 | orchestrator | 2025-11-08 13:54:04.239521 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-11-08 13:54:04.239524 | orchestrator | Saturday 08 November 2025 13:51:23 +0000 (0:00:01.190) 0:08:19.890 ***** 2025-11-08 13:54:04.239528 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.239535 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.239539 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.239543 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.239546 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.239550 | orchestrator | 
ok: [testbed-node-5] 2025-11-08 13:54:04.239554 | orchestrator | 2025-11-08 13:54:04.239557 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-11-08 13:54:04.239561 | orchestrator | Saturday 08 November 2025 13:51:24 +0000 (0:00:00.653) 0:08:20.544 ***** 2025-11-08 13:54:04.239565 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.239568 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.239572 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.239576 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.239580 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.239583 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.239587 | orchestrator | 2025-11-08 13:54:04.239591 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-11-08 13:54:04.239595 | orchestrator | Saturday 08 November 2025 13:51:25 +0000 (0:00:00.974) 0:08:21.519 ***** 2025-11-08 13:54:04.239598 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.239602 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.239606 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.239609 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.239613 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.239617 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.239620 | orchestrator | 2025-11-08 13:54:04.239624 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-11-08 13:54:04.239628 | orchestrator | Saturday 08 November 2025 13:51:25 +0000 (0:00:00.761) 0:08:22.280 ***** 2025-11-08 13:54:04.239631 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.239635 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.239639 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.239643 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.239646 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.239650 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.239654 | orchestrator | 2025-11-08 13:54:04.239657 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-11-08 13:54:04.239661 | orchestrator | Saturday 08 November 2025 13:51:27 +0000 (0:00:01.357) 0:08:23.638 ***** 2025-11-08 13:54:04.239665 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.239668 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.239672 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.239676 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.239680 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.239686 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.239689 | orchestrator | 2025-11-08 13:54:04.239693 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-11-08 13:54:04.239697 | orchestrator | Saturday 08 November 2025 13:51:27 +0000 (0:00:00.573) 0:08:24.212 ***** 2025-11-08 13:54:04.239701 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.239704 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.239708 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.239712 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.239715 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.239719 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.239723 
| orchestrator | 2025-11-08 13:54:04.239726 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-11-08 13:54:04.239730 | orchestrator | Saturday 08 November 2025 13:51:28 +0000 (0:00:00.802) 0:08:25.014 ***** 2025-11-08 13:54:04.239734 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.239738 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.239741 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.239745 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.239749 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.239755 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.239759 | orchestrator | 2025-11-08 13:54:04.239763 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-11-08 13:54:04.239766 | orchestrator | Saturday 08 November 2025 13:51:29 +0000 (0:00:00.994) 0:08:26.008 ***** 2025-11-08 13:54:04.239770 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.239774 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.239777 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.239781 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.239785 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.239788 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.239792 | orchestrator | 2025-11-08 13:54:04.239796 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-11-08 13:54:04.239799 | orchestrator | Saturday 08 November 2025 13:51:30 +0000 (0:00:01.288) 0:08:27.296 ***** 2025-11-08 13:54:04.239803 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.239807 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.239811 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.239814 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.239818 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.239822 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.239825 | orchestrator | 2025-11-08 13:54:04.239832 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-11-08 13:54:04.239835 | orchestrator | Saturday 08 November 2025 13:51:31 +0000 (0:00:00.587) 0:08:27.884 ***** 2025-11-08 13:54:04.239839 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.239843 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.239846 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.239850 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.239854 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.239858 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.239861 | orchestrator | 2025-11-08 13:54:04.239865 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-11-08 13:54:04.239869 | orchestrator | Saturday 08 November 2025 13:51:32 +0000 (0:00:00.885) 0:08:28.770 ***** 2025-11-08 13:54:04.239872 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.239876 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.239880 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.239883 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.239887 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.239891 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.239894 | orchestrator | 2025-11-08 13:54:04.239898 | orchestrator | TASK [ceph-handler : Set_fact 
handler_mds_status] ****************************** 2025-11-08 13:54:04.239902 | orchestrator | Saturday 08 November 2025 13:51:33 +0000 (0:00:00.623) 0:08:29.393 ***** 2025-11-08 13:54:04.239906 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.239909 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.239913 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.239917 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.239920 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.239924 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.239928 | orchestrator | 2025-11-08 13:54:04.239931 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-11-08 13:54:04.239935 | orchestrator | Saturday 08 November 2025 13:51:33 +0000 (0:00:00.800) 0:08:30.194 ***** 2025-11-08 13:54:04.239939 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.239942 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.239946 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.239950 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.239954 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.239957 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.239961 | orchestrator | 2025-11-08 13:54:04.239965 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-11-08 13:54:04.239968 | orchestrator | Saturday 08 November 2025 13:51:34 +0000 (0:00:00.605) 0:08:30.799 ***** 2025-11-08 13:54:04.239975 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.239979 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.239982 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.239986 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.239990 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.239993 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.239997 | orchestrator | 2025-11-08 13:54:04.240001 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-11-08 13:54:04.240004 | orchestrator | Saturday 08 November 2025 13:51:35 +0000 (0:00:00.882) 0:08:31.682 ***** 2025-11-08 13:54:04.240008 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.240012 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.240015 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.240019 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:54:04.240023 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:54:04.240027 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:54:04.240030 | orchestrator | 2025-11-08 13:54:04.240034 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-11-08 13:54:04.240038 | orchestrator | Saturday 08 November 2025 13:51:35 +0000 (0:00:00.608) 0:08:32.290 ***** 2025-11-08 13:54:04.240041 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.240045 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.240049 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.240052 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.240058 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.240062 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.240066 | orchestrator | 2025-11-08 13:54:04.240070 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] 
**************************** 2025-11-08 13:54:04.240073 | orchestrator | Saturday 08 November 2025 13:51:36 +0000 (0:00:00.938) 0:08:33.228 ***** 2025-11-08 13:54:04.240077 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.240081 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.240084 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.240088 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.240092 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.240095 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.240099 | orchestrator | 2025-11-08 13:54:04.240103 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-11-08 13:54:04.240106 | orchestrator | Saturday 08 November 2025 13:51:37 +0000 (0:00:00.606) 0:08:33.835 ***** 2025-11-08 13:54:04.240110 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.240114 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.240117 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.240121 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.240125 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.240128 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.240132 | orchestrator | 2025-11-08 13:54:04.240136 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-11-08 13:54:04.240140 | orchestrator | Saturday 08 November 2025 13:51:38 +0000 (0:00:01.196) 0:08:35.032 ***** 2025-11-08 13:54:04.240143 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-11-08 13:54:04.240147 | orchestrator | 2025-11-08 13:54:04.240151 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-11-08 13:54:04.240154 | orchestrator | Saturday 08 November 2025 13:51:42 +0000 (0:00:03.896) 0:08:38.928 ***** 2025-11-08 13:54:04.240158 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-11-08 13:54:04.240162 | orchestrator | 2025-11-08 13:54:04.240166 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-11-08 13:54:04.240169 | orchestrator | Saturday 08 November 2025 13:51:44 +0000 (0:00:01.973) 0:08:40.901 ***** 2025-11-08 13:54:04.240173 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:54:04.240177 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:54:04.240180 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:54:04.240187 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:54:04.240194 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:54:04.240198 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.240201 | orchestrator | 2025-11-08 13:54:04.240205 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2025-11-08 13:54:04.240209 | orchestrator | Saturday 08 November 2025 13:51:47 +0000 (0:00:02.457) 0:08:43.359 ***** 2025-11-08 13:54:04.240212 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:54:04.240216 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:54:04.240220 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:54:04.240223 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:54:04.240227 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:54:04.240231 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:54:04.240234 | orchestrator | 2025-11-08 13:54:04.240238 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] 
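The "Create client.crash keyring" step above creates the credential the per-node crash agents use to post crash reports back to the cluster. An illustrative form of that call using the standard crash profile caps; the output path, cluster name, and delegation target are assumptions.

    - name: Create client.crash keyring                 # run once on a monitor, then distributed to all nodes
      ansible.builtin.command: >
        ceph auth get-or-create client.crash
        mon 'profile crash' mgr 'profile crash'
        -o /etc/ceph/ceph.client.crash.keyring
      delegate_to: testbed-node-0
      run_once: true
      changed_when: true
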
********************************** 2025-11-08 13:54:04.240242 | orchestrator | Saturday 08 November 2025 13:51:48 +0000 (0:00:01.073) 0:08:44.433 ***** 2025-11-08 13:54:04.240246 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:54:04.240250 | orchestrator | 2025-11-08 13:54:04.240254 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-11-08 13:54:04.240258 | orchestrator | Saturday 08 November 2025 13:51:49 +0000 (0:00:01.321) 0:08:45.754 ***** 2025-11-08 13:54:04.240261 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:54:04.240265 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:54:04.240269 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:54:04.240272 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:54:04.240276 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:54:04.240280 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:54:04.240283 | orchestrator | 2025-11-08 13:54:04.240287 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-11-08 13:54:04.240291 | orchestrator | Saturday 08 November 2025 13:51:51 +0000 (0:00:01.746) 0:08:47.501 ***** 2025-11-08 13:54:04.240294 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:54:04.240298 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:54:04.240302 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:54:04.240305 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:54:04.240309 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:54:04.240313 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:54:04.240316 | orchestrator | 2025-11-08 13:54:04.240320 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-11-08 13:54:04.240324 | orchestrator | Saturday 08 November 2025 13:51:54 +0000 (0:00:03.257) 0:08:50.758 ***** 2025-11-08 13:54:04.240328 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:54:04.240331 | orchestrator | 2025-11-08 13:54:04.240335 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-11-08 13:54:04.240347 | orchestrator | Saturday 08 November 2025 13:51:55 +0000 (0:00:01.397) 0:08:52.156 ***** 2025-11-08 13:54:04.240351 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.240355 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.240359 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.240362 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.240366 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.240370 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.240373 | orchestrator | 2025-11-08 13:54:04.240377 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-11-08 13:54:04.240381 | orchestrator | Saturday 08 November 2025 13:51:56 +0000 (0:00:00.882) 0:08:53.039 ***** 2025-11-08 13:54:04.240385 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:54:04.240391 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:54:04.240397 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:54:04.240409 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:54:04.240418 | orchestrator | changed: 
[testbed-node-1] 2025-11-08 13:54:04.240423 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:54:04.240428 | orchestrator | 2025-11-08 13:54:04.240434 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2025-11-08 13:54:04.240441 | orchestrator | Saturday 08 November 2025 13:51:58 +0000 (0:00:02.241) 0:08:55.280 ***** 2025-11-08 13:54:04.240447 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.240453 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.240459 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.240465 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:54:04.240468 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:54:04.240472 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:54:04.240476 | orchestrator | 2025-11-08 13:54:04.240479 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-11-08 13:54:04.240483 | orchestrator | 2025-11-08 13:54:04.240487 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-11-08 13:54:04.240491 | orchestrator | Saturday 08 November 2025 13:52:00 +0000 (0:00:01.113) 0:08:56.393 ***** 2025-11-08 13:54:04.240494 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 13:54:04.240498 | orchestrator | 2025-11-08 13:54:04.240502 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-11-08 13:54:04.240506 | orchestrator | Saturday 08 November 2025 13:52:00 +0000 (0:00:00.551) 0:08:56.945 ***** 2025-11-08 13:54:04.240509 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 13:54:04.240513 | orchestrator | 2025-11-08 13:54:04.240517 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-11-08 13:54:04.240520 | orchestrator | Saturday 08 November 2025 13:52:01 +0000 (0:00:00.769) 0:08:57.714 ***** 2025-11-08 13:54:04.240524 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.240528 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.240532 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.240535 | orchestrator | 2025-11-08 13:54:04.240539 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-11-08 13:54:04.240543 | orchestrator | Saturday 08 November 2025 13:52:01 +0000 (0:00:00.317) 0:08:58.031 ***** 2025-11-08 13:54:04.240549 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.240553 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.240557 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.240560 | orchestrator | 2025-11-08 13:54:04.240564 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-11-08 13:54:04.240568 | orchestrator | Saturday 08 November 2025 13:52:02 +0000 (0:00:00.712) 0:08:58.744 ***** 2025-11-08 13:54:04.240571 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.240575 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.240579 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.240582 | orchestrator | 2025-11-08 13:54:04.240586 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-11-08 13:54:04.240590 | orchestrator | Saturday 08 November 2025 13:52:03 +0000 
(0:00:00.994) 0:08:59.739 ***** 2025-11-08 13:54:04.240594 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.240597 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.240601 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.240605 | orchestrator | 2025-11-08 13:54:04.240608 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-11-08 13:54:04.240612 | orchestrator | Saturday 08 November 2025 13:52:04 +0000 (0:00:00.789) 0:09:00.528 ***** 2025-11-08 13:54:04.240616 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.240620 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.240623 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.240627 | orchestrator | 2025-11-08 13:54:04.240631 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-11-08 13:54:04.240639 | orchestrator | Saturday 08 November 2025 13:52:04 +0000 (0:00:00.332) 0:09:00.861 ***** 2025-11-08 13:54:04.240642 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.240646 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.240650 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.240653 | orchestrator | 2025-11-08 13:54:04.240657 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-11-08 13:54:04.240661 | orchestrator | Saturday 08 November 2025 13:52:04 +0000 (0:00:00.291) 0:09:01.152 ***** 2025-11-08 13:54:04.240664 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.240668 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.240672 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.240676 | orchestrator | 2025-11-08 13:54:04.240679 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-11-08 13:54:04.240683 | orchestrator | Saturday 08 November 2025 13:52:05 +0000 (0:00:00.593) 0:09:01.746 ***** 2025-11-08 13:54:04.240687 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.240690 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.240694 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.240698 | orchestrator | 2025-11-08 13:54:04.240702 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-11-08 13:54:04.240705 | orchestrator | Saturday 08 November 2025 13:52:06 +0000 (0:00:00.723) 0:09:02.469 ***** 2025-11-08 13:54:04.240709 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.240713 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.240716 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.240720 | orchestrator | 2025-11-08 13:54:04.240724 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-11-08 13:54:04.240727 | orchestrator | Saturday 08 November 2025 13:52:06 +0000 (0:00:00.761) 0:09:03.231 ***** 2025-11-08 13:54:04.240731 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.240735 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.240738 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.240742 | orchestrator | 2025-11-08 13:54:04.240746 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-11-08 13:54:04.240750 | orchestrator | Saturday 08 November 2025 13:52:07 +0000 (0:00:00.345) 0:09:03.577 ***** 2025-11-08 13:54:04.240753 | orchestrator | skipping: [testbed-node-3] 2025-11-08 
13:54:04.240757 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.240761 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.240765 | orchestrator | 2025-11-08 13:54:04.240771 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-11-08 13:54:04.240775 | orchestrator | Saturday 08 November 2025 13:52:07 +0000 (0:00:00.587) 0:09:04.164 ***** 2025-11-08 13:54:04.240779 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.240782 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.240786 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.240790 | orchestrator | 2025-11-08 13:54:04.240793 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-11-08 13:54:04.240797 | orchestrator | Saturday 08 November 2025 13:52:08 +0000 (0:00:00.333) 0:09:04.498 ***** 2025-11-08 13:54:04.240801 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.240804 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.240808 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.240812 | orchestrator | 2025-11-08 13:54:04.240816 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-11-08 13:54:04.240819 | orchestrator | Saturday 08 November 2025 13:52:08 +0000 (0:00:00.383) 0:09:04.881 ***** 2025-11-08 13:54:04.240823 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.240827 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.240830 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.240834 | orchestrator | 2025-11-08 13:54:04.240838 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-11-08 13:54:04.240842 | orchestrator | Saturday 08 November 2025 13:52:08 +0000 (0:00:00.322) 0:09:05.204 ***** 2025-11-08 13:54:04.240849 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.240853 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.240856 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.240860 | orchestrator | 2025-11-08 13:54:04.240864 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-11-08 13:54:04.240867 | orchestrator | Saturday 08 November 2025 13:52:09 +0000 (0:00:00.636) 0:09:05.841 ***** 2025-11-08 13:54:04.240871 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.240875 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.240878 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.240882 | orchestrator | 2025-11-08 13:54:04.240886 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-11-08 13:54:04.240890 | orchestrator | Saturday 08 November 2025 13:52:09 +0000 (0:00:00.315) 0:09:06.156 ***** 2025-11-08 13:54:04.240893 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.240897 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.240903 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.240907 | orchestrator | 2025-11-08 13:54:04.240911 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-11-08 13:54:04.240915 | orchestrator | Saturday 08 November 2025 13:52:10 +0000 (0:00:00.308) 0:09:06.465 ***** 2025-11-08 13:54:04.240918 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.240922 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.240926 | orchestrator | ok: [testbed-node-5] 
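Editorial note: the entries above repeat a pattern used throughout this run — each play first checks whether a given daemon container exists on the node, then records the result as a handler_*_status fact that later handlers consult before restarting anything. Below is a minimal sketch of that pattern, assuming podman as the container runtime and an illustrative container name; this is not the actual ceph-handler role code.

---
# Sketch only: container name, runtime and fact name are assumptions.
- name: Derive a handler status fact from a container check
  hosts: all
  gather_facts: false
  tasks:
    - name: Check for a ceph-crash container (name is an assumption)
      ansible.builtin.command: podman inspect ceph-crash-{{ inventory_hostname }}
      register: crash_container_check
      changed_when: false          # a pure check, never reported as a change
      failed_when: false           # a missing container is a valid result, not an error

    - name: Set_fact handler_crash_status
      ansible.builtin.set_fact:
        handler_crash_status: "{{ crash_container_check.rc == 0 }}"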
2025-11-08 13:54:04.240929 | orchestrator | 2025-11-08 13:54:04.240933 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-11-08 13:54:04.240937 | orchestrator | Saturday 08 November 2025 13:52:10 +0000 (0:00:00.365) 0:09:06.830 ***** 2025-11-08 13:54:04.240941 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.240944 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.240948 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.240952 | orchestrator | 2025-11-08 13:54:04.240955 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-11-08 13:54:04.240959 | orchestrator | Saturday 08 November 2025 13:52:11 +0000 (0:00:00.823) 0:09:07.654 ***** 2025-11-08 13:54:04.240963 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.240967 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.240971 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-11-08 13:54:04.240974 | orchestrator | 2025-11-08 13:54:04.240978 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-11-08 13:54:04.240982 | orchestrator | Saturday 08 November 2025 13:52:11 +0000 (0:00:00.399) 0:09:08.053 ***** 2025-11-08 13:54:04.240985 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-11-08 13:54:04.240989 | orchestrator | 2025-11-08 13:54:04.240993 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-11-08 13:54:04.240997 | orchestrator | Saturday 08 November 2025 13:52:13 +0000 (0:00:02.074) 0:09:10.128 ***** 2025-11-08 13:54:04.241002 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-11-08 13:54:04.241007 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.241011 | orchestrator | 2025-11-08 13:54:04.241015 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-11-08 13:54:04.241019 | orchestrator | Saturday 08 November 2025 13:52:14 +0000 (0:00:00.252) 0:09:10.380 ***** 2025-11-08 13:54:04.241023 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-11-08 13:54:04.241033 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-11-08 13:54:04.241040 | orchestrator | 2025-11-08 13:54:04.241044 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-11-08 13:54:04.241048 | orchestrator | Saturday 08 November 2025 13:52:22 +0000 (0:00:08.838) 0:09:19.218 ***** 2025-11-08 13:54:04.241052 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-11-08 13:54:04.241056 | orchestrator | 2025-11-08 13:54:04.241061 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-11-08 13:54:04.241065 | 
orchestrator | Saturday 08 November 2025 13:52:26 +0000 (0:00:03.672) 0:09:22.891 ***** 2025-11-08 13:54:04.241069 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 13:54:04.241073 | orchestrator | 2025-11-08 13:54:04.241076 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-11-08 13:54:04.241080 | orchestrator | Saturday 08 November 2025 13:52:27 +0000 (0:00:00.575) 0:09:23.467 ***** 2025-11-08 13:54:04.241084 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-11-08 13:54:04.241088 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-11-08 13:54:04.241091 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-11-08 13:54:04.241095 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-11-08 13:54:04.241099 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-11-08 13:54:04.241103 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-11-08 13:54:04.241106 | orchestrator | 2025-11-08 13:54:04.241110 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-11-08 13:54:04.241114 | orchestrator | Saturday 08 November 2025 13:52:28 +0000 (0:00:01.057) 0:09:24.524 ***** 2025-11-08 13:54:04.241117 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-08 13:54:04.241121 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-11-08 13:54:04.241125 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-11-08 13:54:04.241129 | orchestrator | 2025-11-08 13:54:04.241132 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-11-08 13:54:04.241136 | orchestrator | Saturday 08 November 2025 13:52:30 +0000 (0:00:02.404) 0:09:26.928 ***** 2025-11-08 13:54:04.241140 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-11-08 13:54:04.241143 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-11-08 13:54:04.241147 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:54:04.241154 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-11-08 13:54:04.241158 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-11-08 13:54:04.241161 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:54:04.241165 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-11-08 13:54:04.241169 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-11-08 13:54:04.241173 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:54:04.241176 | orchestrator | 2025-11-08 13:54:04.241180 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-11-08 13:54:04.241184 | orchestrator | Saturday 08 November 2025 13:52:32 +0000 (0:00:01.621) 0:09:28.550 ***** 2025-11-08 13:54:04.241188 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:54:04.241191 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:54:04.241195 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:54:04.241199 | orchestrator | 2025-11-08 13:54:04.241203 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-11-08 13:54:04.241206 | orchestrator | Saturday 08 November 2025 13:52:35 +0000 (0:00:02.813) 
0:09:31.363 ***** 2025-11-08 13:54:04.241210 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.241214 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.241221 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.241225 | orchestrator | 2025-11-08 13:54:04.241228 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-11-08 13:54:04.241232 | orchestrator | Saturday 08 November 2025 13:52:35 +0000 (0:00:00.276) 0:09:31.640 ***** 2025-11-08 13:54:04.241236 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 13:54:04.241240 | orchestrator | 2025-11-08 13:54:04.241243 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-11-08 13:54:04.241247 | orchestrator | Saturday 08 November 2025 13:52:36 +0000 (0:00:00.877) 0:09:32.518 ***** 2025-11-08 13:54:04.241251 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 13:54:04.241254 | orchestrator | 2025-11-08 13:54:04.241258 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-11-08 13:54:04.241262 | orchestrator | Saturday 08 November 2025 13:52:36 +0000 (0:00:00.588) 0:09:33.106 ***** 2025-11-08 13:54:04.241265 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:54:04.241269 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:54:04.241273 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:54:04.241277 | orchestrator | 2025-11-08 13:54:04.241280 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-11-08 13:54:04.241284 | orchestrator | Saturday 08 November 2025 13:52:38 +0000 (0:00:01.486) 0:09:34.592 ***** 2025-11-08 13:54:04.241288 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:54:04.241291 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:54:04.241295 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:54:04.241299 | orchestrator | 2025-11-08 13:54:04.241302 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-11-08 13:54:04.241306 | orchestrator | Saturday 08 November 2025 13:52:39 +0000 (0:00:01.520) 0:09:36.113 ***** 2025-11-08 13:54:04.241310 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:54:04.241314 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:54:04.241317 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:54:04.241321 | orchestrator | 2025-11-08 13:54:04.241325 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2025-11-08 13:54:04.241328 | orchestrator | Saturday 08 November 2025 13:52:41 +0000 (0:00:01.907) 0:09:38.021 ***** 2025-11-08 13:54:04.241332 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:54:04.241336 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:54:04.241371 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:54:04.241375 | orchestrator | 2025-11-08 13:54:04.241381 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-11-08 13:54:04.241385 | orchestrator | Saturday 08 November 2025 13:52:43 +0000 (0:00:02.289) 0:09:40.310 ***** 2025-11-08 13:54:04.241389 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.241393 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.241397 | orchestrator | ok: 
[testbed-node-4] 2025-11-08 13:54:04.241400 | orchestrator | 2025-11-08 13:54:04.241404 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-11-08 13:54:04.241408 | orchestrator | Saturday 08 November 2025 13:52:45 +0000 (0:00:01.835) 0:09:42.145 ***** 2025-11-08 13:54:04.241412 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:54:04.241415 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:54:04.241419 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:54:04.241423 | orchestrator | 2025-11-08 13:54:04.241427 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-11-08 13:54:04.241430 | orchestrator | Saturday 08 November 2025 13:52:46 +0000 (0:00:00.730) 0:09:42.876 ***** 2025-11-08 13:54:04.241434 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 13:54:04.241438 | orchestrator | 2025-11-08 13:54:04.241441 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-11-08 13:54:04.241448 | orchestrator | Saturday 08 November 2025 13:52:47 +0000 (0:00:00.885) 0:09:43.761 ***** 2025-11-08 13:54:04.241452 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.241456 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.241460 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.241463 | orchestrator | 2025-11-08 13:54:04.241467 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-11-08 13:54:04.241471 | orchestrator | Saturday 08 November 2025 13:52:47 +0000 (0:00:00.417) 0:09:44.179 ***** 2025-11-08 13:54:04.241475 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:54:04.241478 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:54:04.241482 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:54:04.241486 | orchestrator | 2025-11-08 13:54:04.241489 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-11-08 13:54:04.241493 | orchestrator | Saturday 08 November 2025 13:52:49 +0000 (0:00:01.291) 0:09:45.471 ***** 2025-11-08 13:54:04.241497 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-08 13:54:04.241504 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-08 13:54:04.241508 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-08 13:54:04.241512 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.241515 | orchestrator | 2025-11-08 13:54:04.241519 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-11-08 13:54:04.241523 | orchestrator | Saturday 08 November 2025 13:52:50 +0000 (0:00:01.035) 0:09:46.506 ***** 2025-11-08 13:54:04.241527 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.241530 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.241534 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.241538 | orchestrator | 2025-11-08 13:54:04.241542 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-11-08 13:54:04.241545 | orchestrator | 2025-11-08 13:54:04.241549 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-11-08 13:54:04.241553 | orchestrator | Saturday 08 November 2025 13:52:51 +0000 (0:00:00.954) 0:09:47.461 ***** 2025-11-08 13:54:04.241557 | 
orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 13:54:04.241560 | orchestrator | 2025-11-08 13:54:04.241564 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-11-08 13:54:04.241570 | orchestrator | Saturday 08 November 2025 13:52:51 +0000 (0:00:00.612) 0:09:48.073 ***** 2025-11-08 13:54:04.241576 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 13:54:04.241582 | orchestrator | 2025-11-08 13:54:04.241588 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-11-08 13:54:04.241593 | orchestrator | Saturday 08 November 2025 13:52:52 +0000 (0:00:00.755) 0:09:48.829 ***** 2025-11-08 13:54:04.241599 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.241605 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.241611 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.241616 | orchestrator | 2025-11-08 13:54:04.241622 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-11-08 13:54:04.241628 | orchestrator | Saturday 08 November 2025 13:52:52 +0000 (0:00:00.335) 0:09:49.165 ***** 2025-11-08 13:54:04.241633 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.241639 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.241645 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.241651 | orchestrator | 2025-11-08 13:54:04.241657 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-11-08 13:54:04.241663 | orchestrator | Saturday 08 November 2025 13:52:53 +0000 (0:00:00.740) 0:09:49.905 ***** 2025-11-08 13:54:04.241669 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.241676 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.241680 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.241684 | orchestrator | 2025-11-08 13:54:04.241694 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-11-08 13:54:04.241698 | orchestrator | Saturday 08 November 2025 13:52:54 +0000 (0:00:01.019) 0:09:50.924 ***** 2025-11-08 13:54:04.241701 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.241705 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.241708 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.241712 | orchestrator | 2025-11-08 13:54:04.241716 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-11-08 13:54:04.241720 | orchestrator | Saturday 08 November 2025 13:52:55 +0000 (0:00:00.766) 0:09:51.691 ***** 2025-11-08 13:54:04.241723 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.241727 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.241731 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.241734 | orchestrator | 2025-11-08 13:54:04.241738 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-11-08 13:54:04.241745 | orchestrator | Saturday 08 November 2025 13:52:55 +0000 (0:00:00.375) 0:09:52.066 ***** 2025-11-08 13:54:04.241749 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.241752 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.241756 | orchestrator | skipping: [testbed-node-5] 2025-11-08 
13:54:04.241760 | orchestrator | 2025-11-08 13:54:04.241764 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-11-08 13:54:04.241767 | orchestrator | Saturday 08 November 2025 13:52:56 +0000 (0:00:00.362) 0:09:52.429 ***** 2025-11-08 13:54:04.241771 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.241775 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.241778 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.241782 | orchestrator | 2025-11-08 13:54:04.241786 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-11-08 13:54:04.241790 | orchestrator | Saturday 08 November 2025 13:52:56 +0000 (0:00:00.618) 0:09:53.047 ***** 2025-11-08 13:54:04.241793 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.241797 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.241801 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.241808 | orchestrator | 2025-11-08 13:54:04.241814 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-11-08 13:54:04.241819 | orchestrator | Saturday 08 November 2025 13:52:57 +0000 (0:00:00.715) 0:09:53.763 ***** 2025-11-08 13:54:04.241825 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.241831 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.241836 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.241842 | orchestrator | 2025-11-08 13:54:04.241847 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-11-08 13:54:04.241853 | orchestrator | Saturday 08 November 2025 13:52:58 +0000 (0:00:00.727) 0:09:54.490 ***** 2025-11-08 13:54:04.241859 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.241866 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.241873 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.241876 | orchestrator | 2025-11-08 13:54:04.241880 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-11-08 13:54:04.241884 | orchestrator | Saturday 08 November 2025 13:52:58 +0000 (0:00:00.343) 0:09:54.834 ***** 2025-11-08 13:54:04.241888 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.241891 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.241895 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.241899 | orchestrator | 2025-11-08 13:54:04.241903 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-11-08 13:54:04.241909 | orchestrator | Saturday 08 November 2025 13:52:58 +0000 (0:00:00.306) 0:09:55.140 ***** 2025-11-08 13:54:04.241913 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.241917 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.241921 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.241925 | orchestrator | 2025-11-08 13:54:04.241928 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-11-08 13:54:04.241937 | orchestrator | Saturday 08 November 2025 13:52:59 +0000 (0:00:00.638) 0:09:55.779 ***** 2025-11-08 13:54:04.241940 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.241944 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.241948 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.241952 | orchestrator | 2025-11-08 13:54:04.241955 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] 
****************************** 2025-11-08 13:54:04.241959 | orchestrator | Saturday 08 November 2025 13:52:59 +0000 (0:00:00.369) 0:09:56.148 ***** 2025-11-08 13:54:04.241963 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.241966 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.241970 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.241974 | orchestrator | 2025-11-08 13:54:04.241978 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-11-08 13:54:04.241981 | orchestrator | Saturday 08 November 2025 13:53:00 +0000 (0:00:00.381) 0:09:56.530 ***** 2025-11-08 13:54:04.241985 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.241989 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.241992 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.241996 | orchestrator | 2025-11-08 13:54:04.242000 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-11-08 13:54:04.242004 | orchestrator | Saturday 08 November 2025 13:53:00 +0000 (0:00:00.343) 0:09:56.873 ***** 2025-11-08 13:54:04.242007 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.242011 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.242036 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.242040 | orchestrator | 2025-11-08 13:54:04.242044 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-11-08 13:54:04.242047 | orchestrator | Saturday 08 November 2025 13:53:01 +0000 (0:00:00.796) 0:09:57.670 ***** 2025-11-08 13:54:04.242051 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.242055 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.242059 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.242062 | orchestrator | 2025-11-08 13:54:04.242066 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-11-08 13:54:04.242070 | orchestrator | Saturday 08 November 2025 13:53:01 +0000 (0:00:00.374) 0:09:58.044 ***** 2025-11-08 13:54:04.242074 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.242077 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.242081 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.242085 | orchestrator | 2025-11-08 13:54:04.242089 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-11-08 13:54:04.242092 | orchestrator | Saturday 08 November 2025 13:53:02 +0000 (0:00:00.381) 0:09:58.426 ***** 2025-11-08 13:54:04.242096 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.242100 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.242104 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.242107 | orchestrator | 2025-11-08 13:54:04.242111 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-11-08 13:54:04.242115 | orchestrator | Saturday 08 November 2025 13:53:02 +0000 (0:00:00.844) 0:09:59.270 ***** 2025-11-08 13:54:04.242119 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 13:54:04.242122 | orchestrator | 2025-11-08 13:54:04.242126 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-11-08 13:54:04.242130 | orchestrator | Saturday 08 November 2025 13:53:03 +0000 (0:00:00.534) 0:09:59.805 ***** 2025-11-08 13:54:04.242137 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-08 13:54:04.242140 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-11-08 13:54:04.242144 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-11-08 13:54:04.242148 | orchestrator | 2025-11-08 13:54:04.242152 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-11-08 13:54:04.242156 | orchestrator | Saturday 08 November 2025 13:53:05 +0000 (0:00:02.130) 0:10:01.935 ***** 2025-11-08 13:54:04.242163 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-11-08 13:54:04.242167 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-11-08 13:54:04.242171 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:54:04.242175 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-11-08 13:54:04.242178 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-11-08 13:54:04.242182 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:54:04.242186 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-11-08 13:54:04.242190 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-11-08 13:54:04.242193 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:54:04.242197 | orchestrator | 2025-11-08 13:54:04.242201 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-11-08 13:54:04.242205 | orchestrator | Saturday 08 November 2025 13:53:07 +0000 (0:00:01.459) 0:10:03.395 ***** 2025-11-08 13:54:04.242208 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.242212 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.242216 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.242220 | orchestrator | 2025-11-08 13:54:04.242223 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-11-08 13:54:04.242227 | orchestrator | Saturday 08 November 2025 13:53:07 +0000 (0:00:00.337) 0:10:03.732 ***** 2025-11-08 13:54:04.242231 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 13:54:04.242235 | orchestrator | 2025-11-08 13:54:04.242239 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-11-08 13:54:04.242242 | orchestrator | Saturday 08 November 2025 13:53:07 +0000 (0:00:00.514) 0:10:04.247 ***** 2025-11-08 13:54:04.242249 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-11-08 13:54:04.242253 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-11-08 13:54:04.242257 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-11-08 13:54:04.242261 | orchestrator | 2025-11-08 13:54:04.242265 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-11-08 13:54:04.242268 | orchestrator | Saturday 08 November 2025 13:53:09 +0000 (0:00:01.221) 0:10:05.469 ***** 2025-11-08 13:54:04.242272 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-08 
13:54:04.242276 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-11-08 13:54:04.242280 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-08 13:54:04.242284 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-11-08 13:54:04.242287 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-08 13:54:04.242291 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-11-08 13:54:04.242295 | orchestrator | 2025-11-08 13:54:04.242299 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-11-08 13:54:04.242303 | orchestrator | Saturday 08 November 2025 13:53:13 +0000 (0:00:04.287) 0:10:09.757 ***** 2025-11-08 13:54:04.242306 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-08 13:54:04.242310 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-11-08 13:54:04.242314 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-08 13:54:04.242323 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-11-08 13:54:04.242327 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-08 13:54:04.242330 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-11-08 13:54:04.242334 | orchestrator | 2025-11-08 13:54:04.242349 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-11-08 13:54:04.242354 | orchestrator | Saturday 08 November 2025 13:53:15 +0000 (0:00:02.345) 0:10:12.102 ***** 2025-11-08 13:54:04.242357 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-11-08 13:54:04.242361 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:54:04.242365 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-11-08 13:54:04.242369 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-11-08 13:54:04.242372 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:54:04.242376 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:54:04.242380 | orchestrator | 2025-11-08 13:54:04.242383 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-11-08 13:54:04.242387 | orchestrator | Saturday 08 November 2025 13:53:16 +0000 (0:00:01.177) 0:10:13.280 ***** 2025-11-08 13:54:04.242393 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-11-08 13:54:04.242397 | orchestrator | 2025-11-08 13:54:04.242401 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-11-08 13:54:04.242404 | orchestrator | Saturday 08 November 2025 13:53:17 +0000 (0:00:00.226) 0:10:13.506 ***** 2025-11-08 13:54:04.242408 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-08 13:54:04.242412 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-08 13:54:04.242416 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-08 13:54:04.242420 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-08 13:54:04.242424 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-08 13:54:04.242427 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.242431 | orchestrator | 2025-11-08 13:54:04.242435 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-11-08 13:54:04.242439 | orchestrator | Saturday 08 November 2025 13:53:18 +0000 (0:00:01.052) 0:10:14.559 ***** 2025-11-08 13:54:04.242442 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-08 13:54:04.242446 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-08 13:54:04.242450 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-08 13:54:04.242456 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-08 13:54:04.242460 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-08 13:54:04.242464 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.242467 | orchestrator | 2025-11-08 13:54:04.242471 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-11-08 13:54:04.242475 | orchestrator | Saturday 08 November 2025 13:53:18 +0000 (0:00:00.582) 0:10:15.141 ***** 2025-11-08 13:54:04.242479 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-11-08 13:54:04.242486 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-11-08 13:54:04.242490 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-11-08 13:54:04.242494 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-11-08 13:54:04.242497 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-11-08 13:54:04.242501 | orchestrator | 2025-11-08 13:54:04.242505 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-11-08 13:54:04.242508 | orchestrator | Saturday 08 November 2025 13:53:50 +0000 (0:00:31.581) 0:10:46.723 ***** 2025-11-08 13:54:04.242512 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.242516 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.242520 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.242523 | orchestrator | 2025-11-08 
13:54:04.242527 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-11-08 13:54:04.242531 | orchestrator | Saturday 08 November 2025 13:53:50 +0000 (0:00:00.380) 0:10:47.103 ***** 2025-11-08 13:54:04.242534 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.242538 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.242542 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.242546 | orchestrator | 2025-11-08 13:54:04.242549 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-11-08 13:54:04.242553 | orchestrator | Saturday 08 November 2025 13:53:51 +0000 (0:00:00.332) 0:10:47.436 ***** 2025-11-08 13:54:04.242557 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 13:54:04.242561 | orchestrator | 2025-11-08 13:54:04.242564 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-11-08 13:54:04.242568 | orchestrator | Saturday 08 November 2025 13:53:51 +0000 (0:00:00.783) 0:10:48.219 ***** 2025-11-08 13:54:04.242572 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 13:54:04.242576 | orchestrator | 2025-11-08 13:54:04.242579 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-11-08 13:54:04.242583 | orchestrator | Saturday 08 November 2025 13:53:52 +0000 (0:00:00.519) 0:10:48.739 ***** 2025-11-08 13:54:04.242588 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:54:04.242592 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:54:04.242596 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:54:04.242600 | orchestrator | 2025-11-08 13:54:04.242604 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-11-08 13:54:04.242607 | orchestrator | Saturday 08 November 2025 13:53:53 +0000 (0:00:01.252) 0:10:49.991 ***** 2025-11-08 13:54:04.242611 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:54:04.242615 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:54:04.242619 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:54:04.242622 | orchestrator | 2025-11-08 13:54:04.242626 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-11-08 13:54:04.242630 | orchestrator | Saturday 08 November 2025 13:53:55 +0000 (0:00:01.448) 0:10:51.440 ***** 2025-11-08 13:54:04.242634 | orchestrator | changed: [testbed-node-3] 2025-11-08 13:54:04.242638 | orchestrator | changed: [testbed-node-4] 2025-11-08 13:54:04.242641 | orchestrator | changed: [testbed-node-5] 2025-11-08 13:54:04.242645 | orchestrator | 2025-11-08 13:54:04.242649 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-11-08 13:54:04.242652 | orchestrator | Saturday 08 November 2025 13:53:56 +0000 (0:00:01.838) 0:10:53.278 ***** 2025-11-08 13:54:04.242659 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-11-08 13:54:04.242663 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-11-08 13:54:04.242667 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 
'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-11-08 13:54:04.242670 | orchestrator | 2025-11-08 13:54:04.242674 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-11-08 13:54:04.242678 | orchestrator | Saturday 08 November 2025 13:54:00 +0000 (0:00:03.813) 0:10:57.092 ***** 2025-11-08 13:54:04.242682 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.242685 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.242689 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.242693 | orchestrator | 2025-11-08 13:54:04.242700 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-11-08 13:54:04.242704 | orchestrator | Saturday 08 November 2025 13:54:01 +0000 (0:00:00.359) 0:10:57.452 ***** 2025-11-08 13:54:04.242707 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 13:54:04.242711 | orchestrator | 2025-11-08 13:54:04.242715 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-11-08 13:54:04.242719 | orchestrator | Saturday 08 November 2025 13:54:01 +0000 (0:00:00.544) 0:10:57.996 ***** 2025-11-08 13:54:04.242722 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.242726 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.242730 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.242734 | orchestrator | 2025-11-08 13:54:04.242737 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-11-08 13:54:04.242741 | orchestrator | Saturday 08 November 2025 13:54:02 +0000 (0:00:00.605) 0:10:58.602 ***** 2025-11-08 13:54:04.242745 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.242749 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:54:04.242752 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:54:04.242756 | orchestrator | 2025-11-08 13:54:04.242760 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-11-08 13:54:04.242763 | orchestrator | Saturday 08 November 2025 13:54:02 +0000 (0:00:00.334) 0:10:58.936 ***** 2025-11-08 13:54:04.242767 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-08 13:54:04.242771 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-08 13:54:04.242775 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-08 13:54:04.242778 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:54:04.242782 | orchestrator | 2025-11-08 13:54:04.242786 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-11-08 13:54:04.242790 | orchestrator | Saturday 08 November 2025 13:54:03 +0000 (0:00:00.594) 0:10:59.531 ***** 2025-11-08 13:54:04.242793 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:54:04.242797 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:54:04.242802 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:54:04.242808 | orchestrator | 2025-11-08 13:54:04.242814 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 13:54:04.242820 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2025-11-08 13:54:04.242827 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 
ignored=0 2025-11-08 13:54:04.242833 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-11-08 13:54:04.242838 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2025-11-08 13:54:04.242848 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-11-08 13:54:04.242853 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-11-08 13:54:04.242859 | orchestrator | 2025-11-08 13:54:04.242865 | orchestrator | 2025-11-08 13:54:04.242871 | orchestrator | 2025-11-08 13:54:04.242880 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 13:54:04.242886 | orchestrator | Saturday 08 November 2025 13:54:03 +0000 (0:00:00.241) 0:10:59.772 ***** 2025-11-08 13:54:04.242893 | orchestrator | =============================================================================== 2025-11-08 13:54:04.242898 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 52.39s 2025-11-08 13:54:04.242906 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 44.91s 2025-11-08 13:54:04.242910 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.58s 2025-11-08 13:54:04.242913 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.28s 2025-11-08 13:54:04.242917 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.32s 2025-11-08 13:54:04.242921 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.54s 2025-11-08 13:54:04.242925 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.37s 2025-11-08 13:54:04.242928 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.97s 2025-11-08 13:54:04.242932 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.84s 2025-11-08 13:54:04.242936 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.60s 2025-11-08 13:54:04.242940 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.44s 2025-11-08 13:54:04.242943 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.14s 2025-11-08 13:54:04.242947 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.29s 2025-11-08 13:54:04.242951 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.90s 2025-11-08 13:54:04.242955 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.88s 2025-11-08 13:54:04.242958 | orchestrator | ceph-rgw : Systemd start rgw container ---------------------------------- 3.81s 2025-11-08 13:54:04.242962 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.67s 2025-11-08 13:54:04.242969 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.51s 2025-11-08 13:54:04.242973 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 3.45s 2025-11-08 13:54:04.242976 | orchestrator | ceph-osd : Unset noup flag ---------------------------------------------- 3.44s 2025-11-08 
13:54:04.242980 | orchestrator | 2025-11-08 13:54:04 | INFO  | Task 03836f68-8f22-4f41-9df5-0916ef48261b is in state STARTED 2025-11-08 13:54:04.242984 | orchestrator | 2025-11-08 13:54:04 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:54:07.293053 | orchestrator | 2025-11-08 13:54:07 | INFO  | Task df1975d7-ec3c-4993-a36a-5c97e891420c is in state STARTED 2025-11-08 13:54:07.297674 | orchestrator | 2025-11-08 13:54:07 | INFO  | Task 9c22c8e5-59d9-4675-8c90-0d45b4ffcfb3 is in state STARTED 2025-11-08 13:54:07.299404 | orchestrator | 2025-11-08 13:54:07 | INFO  | Task 03836f68-8f22-4f41-9df5-0916ef48261b is in state STARTED 2025-11-08 13:54:07.299715 | orchestrator | 2025-11-08 13:54:07 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:54:10.364738 | orchestrator | 2025-11-08 13:54:10 | INFO  | Task df1975d7-ec3c-4993-a36a-5c97e891420c is in state STARTED 2025-11-08 13:54:10.367899 | orchestrator | 2025-11-08 13:54:10 | INFO  | Task 9c22c8e5-59d9-4675-8c90-0d45b4ffcfb3 is in state STARTED 2025-11-08 13:54:10.370796 | orchestrator | 2025-11-08 13:54:10 | INFO  | Task 03836f68-8f22-4f41-9df5-0916ef48261b is in state STARTED 2025-11-08 13:54:10.370872 | orchestrator | 2025-11-08 13:54:10 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:54:13.418911 | orchestrator | 2025-11-08 13:54:13 | INFO  | Task df1975d7-ec3c-4993-a36a-5c97e891420c is in state STARTED 2025-11-08 13:54:13.420814 | orchestrator | 2025-11-08 13:54:13 | INFO  | Task 9c22c8e5-59d9-4675-8c90-0d45b4ffcfb3 is in state STARTED 2025-11-08 13:54:13.422115 | orchestrator | 2025-11-08 13:54:13 | INFO  | Task 03836f68-8f22-4f41-9df5-0916ef48261b is in state STARTED 2025-11-08 13:54:13.422392 | orchestrator | 2025-11-08 13:54:13 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:54:16.476038 | orchestrator | 2025-11-08 13:54:16 | INFO  | Task df1975d7-ec3c-4993-a36a-5c97e891420c is in state STARTED 2025-11-08 13:54:16.477477 | orchestrator | 2025-11-08 13:54:16 | INFO  | Task 9c22c8e5-59d9-4675-8c90-0d45b4ffcfb3 is in state STARTED 2025-11-08 13:54:16.479440 | orchestrator | 2025-11-08 13:54:16 | INFO  | Task 03836f68-8f22-4f41-9df5-0916ef48261b is in state STARTED 2025-11-08 13:54:16.479523 | orchestrator | 2025-11-08 13:54:16 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:54:19.523247 | orchestrator | 2025-11-08 13:54:19 | INFO  | Task df1975d7-ec3c-4993-a36a-5c97e891420c is in state STARTED 2025-11-08 13:54:19.523932 | orchestrator | 2025-11-08 13:54:19 | INFO  | Task 9c22c8e5-59d9-4675-8c90-0d45b4ffcfb3 is in state STARTED 2025-11-08 13:54:19.525449 | orchestrator | 2025-11-08 13:54:19 | INFO  | Task 03836f68-8f22-4f41-9df5-0916ef48261b is in state STARTED 2025-11-08 13:54:19.525673 | orchestrator | 2025-11-08 13:54:19 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:54:22.563137 | orchestrator | 2025-11-08 13:54:22 | INFO  | Task df1975d7-ec3c-4993-a36a-5c97e891420c is in state STARTED 2025-11-08 13:54:22.564260 | orchestrator | 2025-11-08 13:54:22 | INFO  | Task 9c22c8e5-59d9-4675-8c90-0d45b4ffcfb3 is in state STARTED 2025-11-08 13:54:22.566792 | orchestrator | 2025-11-08 13:54:22 | INFO  | Task 03836f68-8f22-4f41-9df5-0916ef48261b is in state STARTED 2025-11-08 13:54:22.566864 | orchestrator | 2025-11-08 13:54:22 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:54:25.608025 | orchestrator | 2025-11-08 13:54:25 | INFO  | Task df1975d7-ec3c-4993-a36a-5c97e891420c is in state STARTED 2025-11-08 13:54:25.608848 | orchestrator | 2025-11-08 
13:54:25 | INFO  | Task 9c22c8e5-59d9-4675-8c90-0d45b4ffcfb3 is in state STARTED 2025-11-08 13:54:25.610349 | orchestrator | 2025-11-08 13:54:25 | INFO  | Task 03836f68-8f22-4f41-9df5-0916ef48261b is in state STARTED 2025-11-08 13:54:25.610802 | orchestrator | 2025-11-08 13:54:25 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:54:28.658159 | orchestrator | 2025-11-08 13:54:28 | INFO  | Task df1975d7-ec3c-4993-a36a-5c97e891420c is in state STARTED 2025-11-08 13:54:28.659747 | orchestrator | 2025-11-08 13:54:28 | INFO  | Task 9c22c8e5-59d9-4675-8c90-0d45b4ffcfb3 is in state STARTED 2025-11-08 13:54:28.662997 | orchestrator | 2025-11-08 13:54:28 | INFO  | Task 03836f68-8f22-4f41-9df5-0916ef48261b is in state STARTED 2025-11-08 13:54:28.663050 | orchestrator | 2025-11-08 13:54:28 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:54:31.713392 | orchestrator | 2025-11-08 13:54:31 | INFO  | Task df1975d7-ec3c-4993-a36a-5c97e891420c is in state STARTED 2025-11-08 13:54:31.716736 | orchestrator | 2025-11-08 13:54:31 | INFO  | Task 9c22c8e5-59d9-4675-8c90-0d45b4ffcfb3 is in state STARTED 2025-11-08 13:54:31.722457 | orchestrator | 2025-11-08 13:54:31 | INFO  | Task 03836f68-8f22-4f41-9df5-0916ef48261b is in state STARTED 2025-11-08 13:54:31.722503 | orchestrator | 2025-11-08 13:54:31 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:54:34.773876 | orchestrator | 2025-11-08 13:54:34 | INFO  | Task df1975d7-ec3c-4993-a36a-5c97e891420c is in state STARTED 2025-11-08 13:54:34.776390 | orchestrator | 2025-11-08 13:54:34 | INFO  | Task 9c22c8e5-59d9-4675-8c90-0d45b4ffcfb3 is in state STARTED 2025-11-08 13:54:34.779426 | orchestrator | 2025-11-08 13:54:34 | INFO  | Task 03836f68-8f22-4f41-9df5-0916ef48261b is in state STARTED 2025-11-08 13:54:34.779519 | orchestrator | 2025-11-08 13:54:34 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:54:37.830082 | orchestrator | 2025-11-08 13:54:37 | INFO  | Task df1975d7-ec3c-4993-a36a-5c97e891420c is in state STARTED 2025-11-08 13:54:37.832611 | orchestrator | 2025-11-08 13:54:37 | INFO  | Task 9c22c8e5-59d9-4675-8c90-0d45b4ffcfb3 is in state STARTED 2025-11-08 13:54:37.840217 | orchestrator | 2025-11-08 13:54:37 | INFO  | Task 03836f68-8f22-4f41-9df5-0916ef48261b is in state STARTED 2025-11-08 13:54:37.840358 | orchestrator | 2025-11-08 13:54:37 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:54:40.879987 | orchestrator | 2025-11-08 13:54:40 | INFO  | Task df1975d7-ec3c-4993-a36a-5c97e891420c is in state STARTED 2025-11-08 13:54:40.881285 | orchestrator | 2025-11-08 13:54:40 | INFO  | Task 9c22c8e5-59d9-4675-8c90-0d45b4ffcfb3 is in state STARTED 2025-11-08 13:54:40.882688 | orchestrator | 2025-11-08 13:54:40 | INFO  | Task 03836f68-8f22-4f41-9df5-0916ef48261b is in state STARTED 2025-11-08 13:54:40.883187 | orchestrator | 2025-11-08 13:54:40 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:54:43.934901 | orchestrator | 2025-11-08 13:54:43 | INFO  | Task df1975d7-ec3c-4993-a36a-5c97e891420c is in state STARTED 2025-11-08 13:54:43.935709 | orchestrator | 2025-11-08 13:54:43 | INFO  | Task 9c22c8e5-59d9-4675-8c90-0d45b4ffcfb3 is in state STARTED 2025-11-08 13:54:43.937321 | orchestrator | 2025-11-08 13:54:43 | INFO  | Task 03836f68-8f22-4f41-9df5-0916ef48261b is in state STARTED 2025-11-08 13:54:43.937383 | orchestrator | 2025-11-08 13:54:43 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:54:46.984855 | orchestrator | 2025-11-08 13:54:46 | INFO  | Task 
df1975d7-ec3c-4993-a36a-5c97e891420c is in state STARTED 2025-11-08 13:54:46.986518 | orchestrator | 2025-11-08 13:54:46 | INFO  | Task 9c22c8e5-59d9-4675-8c90-0d45b4ffcfb3 is in state STARTED 2025-11-08 13:54:46.989451 | orchestrator | 2025-11-08 13:54:46 | INFO  | Task 03836f68-8f22-4f41-9df5-0916ef48261b is in state STARTED 2025-11-08 13:54:46.989520 | orchestrator | 2025-11-08 13:54:46 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:54:50.029811 | orchestrator | 2025-11-08 13:54:50 | INFO  | Task df1975d7-ec3c-4993-a36a-5c97e891420c is in state STARTED 2025-11-08 13:54:50.031996 | orchestrator | 2025-11-08 13:54:50 | INFO  | Task 9c22c8e5-59d9-4675-8c90-0d45b4ffcfb3 is in state STARTED 2025-11-08 13:54:50.034491 | orchestrator | 2025-11-08 13:54:50 | INFO  | Task 03836f68-8f22-4f41-9df5-0916ef48261b is in state STARTED 2025-11-08 13:54:50.034893 | orchestrator | 2025-11-08 13:54:50 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:54:53.075073 | orchestrator | 2025-11-08 13:54:53 | INFO  | Task df1975d7-ec3c-4993-a36a-5c97e891420c is in state STARTED 2025-11-08 13:54:53.076736 | orchestrator | 2025-11-08 13:54:53 | INFO  | Task 9c22c8e5-59d9-4675-8c90-0d45b4ffcfb3 is in state STARTED 2025-11-08 13:54:53.079783 | orchestrator | 2025-11-08 13:54:53 | INFO  | Task 03836f68-8f22-4f41-9df5-0916ef48261b is in state STARTED 2025-11-08 13:54:53.079907 | orchestrator | 2025-11-08 13:54:53 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:54:56.122384 | orchestrator | 2025-11-08 13:54:56 | INFO  | Task df1975d7-ec3c-4993-a36a-5c97e891420c is in state STARTED 2025-11-08 13:54:56.124824 | orchestrator | 2025-11-08 13:54:56 | INFO  | Task 9c22c8e5-59d9-4675-8c90-0d45b4ffcfb3 is in state STARTED 2025-11-08 13:54:56.126586 | orchestrator | 2025-11-08 13:54:56 | INFO  | Task 03836f68-8f22-4f41-9df5-0916ef48261b is in state STARTED 2025-11-08 13:54:56.126636 | orchestrator | 2025-11-08 13:54:56 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:54:59.169637 | orchestrator | 2025-11-08 13:54:59 | INFO  | Task df1975d7-ec3c-4993-a36a-5c97e891420c is in state STARTED 2025-11-08 13:54:59.170320 | orchestrator | 2025-11-08 13:54:59 | INFO  | Task 9c22c8e5-59d9-4675-8c90-0d45b4ffcfb3 is in state STARTED 2025-11-08 13:54:59.172556 | orchestrator | 2025-11-08 13:54:59 | INFO  | Task 03836f68-8f22-4f41-9df5-0916ef48261b is in state STARTED 2025-11-08 13:54:59.172610 | orchestrator | 2025-11-08 13:54:59 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:55:02.216758 | orchestrator | 2025-11-08 13:55:02 | INFO  | Task df1975d7-ec3c-4993-a36a-5c97e891420c is in state STARTED 2025-11-08 13:55:02.218557 | orchestrator | 2025-11-08 13:55:02 | INFO  | Task 9c22c8e5-59d9-4675-8c90-0d45b4ffcfb3 is in state STARTED 2025-11-08 13:55:02.220143 | orchestrator | 2025-11-08 13:55:02 | INFO  | Task 03836f68-8f22-4f41-9df5-0916ef48261b is in state STARTED 2025-11-08 13:55:02.220383 | orchestrator | 2025-11-08 13:55:02 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:55:05.268115 | orchestrator | 2025-11-08 13:55:05 | INFO  | Task df1975d7-ec3c-4993-a36a-5c97e891420c is in state STARTED 2025-11-08 13:55:05.268186 | orchestrator | 2025-11-08 13:55:05 | INFO  | Task 9c22c8e5-59d9-4675-8c90-0d45b4ffcfb3 is in state STARTED 2025-11-08 13:55:05.268618 | orchestrator | 2025-11-08 13:55:05 | INFO  | Task 03836f68-8f22-4f41-9df5-0916ef48261b is in state STARTED 2025-11-08 13:55:05.268628 | orchestrator | 2025-11-08 13:55:05 | INFO  | Wait 1 second(s) until the next 
check 2025-11-08 13:55:08.317495 | orchestrator | 2025-11-08 13:55:08 | INFO  | Task df1975d7-ec3c-4993-a36a-5c97e891420c is in state STARTED 2025-11-08 13:55:08.320073 | orchestrator | 2025-11-08 13:55:08 | INFO  | Task 9c22c8e5-59d9-4675-8c90-0d45b4ffcfb3 is in state STARTED 2025-11-08 13:55:08.324686 | orchestrator | 2025-11-08 13:55:08 | INFO  | Task 03836f68-8f22-4f41-9df5-0916ef48261b is in state SUCCESS 2025-11-08 13:55:08.324908 | orchestrator | 2025-11-08 13:55:08.326229 | orchestrator | 2025-11-08 13:55:08.326300 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-11-08 13:55:08.326312 | orchestrator | 2025-11-08 13:55:08.326321 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-11-08 13:55:08.326326 | orchestrator | Saturday 08 November 2025 13:52:06 +0000 (0:00:00.102) 0:00:00.102 ***** 2025-11-08 13:55:08.326331 | orchestrator | ok: [localhost] => { 2025-11-08 13:55:08.326337 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-11-08 13:55:08.326341 | orchestrator | } 2025-11-08 13:55:08.326346 | orchestrator | 2025-11-08 13:55:08.326350 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-11-08 13:55:08.326355 | orchestrator | Saturday 08 November 2025 13:52:06 +0000 (0:00:00.047) 0:00:00.150 ***** 2025-11-08 13:55:08.326359 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-11-08 13:55:08.326365 | orchestrator | ...ignoring 2025-11-08 13:55:08.326490 | orchestrator | 2025-11-08 13:55:08.326497 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-11-08 13:55:08.326500 | orchestrator | Saturday 08 November 2025 13:52:09 +0000 (0:00:02.928) 0:00:03.078 ***** 2025-11-08 13:55:08.326504 | orchestrator | skipping: [localhost] 2025-11-08 13:55:08.326508 | orchestrator | 2025-11-08 13:55:08.326512 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-11-08 13:55:08.326516 | orchestrator | Saturday 08 November 2025 13:52:09 +0000 (0:00:00.063) 0:00:03.142 ***** 2025-11-08 13:55:08.326520 | orchestrator | ok: [localhost] 2025-11-08 13:55:08.326524 | orchestrator | 2025-11-08 13:55:08.326528 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-08 13:55:08.326532 | orchestrator | 2025-11-08 13:55:08.326536 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-08 13:55:08.326539 | orchestrator | Saturday 08 November 2025 13:52:09 +0000 (0:00:00.198) 0:00:03.341 ***** 2025-11-08 13:55:08.326543 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:55:08.326547 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:55:08.326550 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:55:08.326554 | orchestrator | 2025-11-08 13:55:08.326558 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-08 13:55:08.326562 | orchestrator | Saturday 08 November 2025 13:52:09 +0000 (0:00:00.320) 0:00:03.661 ***** 2025-11-08 13:55:08.326565 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-11-08 13:55:08.326570 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-11-08 
13:55:08.326574 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-11-08 13:55:08.326578 | orchestrator | 2025-11-08 13:55:08.326582 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-11-08 13:55:08.326586 | orchestrator | 2025-11-08 13:55:08.326589 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-11-08 13:55:08.326593 | orchestrator | Saturday 08 November 2025 13:52:10 +0000 (0:00:00.637) 0:00:04.298 ***** 2025-11-08 13:55:08.326597 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-11-08 13:55:08.326601 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-11-08 13:55:08.326605 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-11-08 13:55:08.326609 | orchestrator | 2025-11-08 13:55:08.326613 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-11-08 13:55:08.326617 | orchestrator | Saturday 08 November 2025 13:52:10 +0000 (0:00:00.415) 0:00:04.713 ***** 2025-11-08 13:55:08.326621 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:55:08.326627 | orchestrator | 2025-11-08 13:55:08.326630 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-11-08 13:55:08.326634 | orchestrator | Saturday 08 November 2025 13:52:11 +0000 (0:00:00.686) 0:00:05.400 ***** 2025-11-08 13:55:08.326653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-11-08 13:55:08.326705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-11-08 13:55:08.326715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-11-08 13:55:08.326724 | 
orchestrator | 2025-11-08 13:55:08.326735 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-11-08 13:55:08.326739 | orchestrator | Saturday 08 November 2025 13:52:14 +0000 (0:00:03.151) 0:00:08.552 ***** 2025-11-08 13:55:08.326743 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:55:08.326748 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:55:08.326751 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:55:08.326755 | orchestrator | 2025-11-08 13:55:08.326759 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-11-08 13:55:08.326763 | orchestrator | Saturday 08 November 2025 13:52:15 +0000 (0:00:00.680) 0:00:09.233 ***** 2025-11-08 13:55:08.326766 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:55:08.326770 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:55:08.326774 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:55:08.326778 | orchestrator | 2025-11-08 13:55:08.326781 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-11-08 13:55:08.326785 | orchestrator | Saturday 08 November 2025 13:52:16 +0000 (0:00:01.378) 0:00:10.611 ***** 2025-11-08 13:55:08.326793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-11-08 13:55:08.326801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-11-08 13:55:08.326810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-11-08 13:55:08.326814 | orchestrator | 2025-11-08 13:55:08.326818 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-11-08 13:55:08.326822 | orchestrator | Saturday 08 November 2025 13:52:20 +0000 (0:00:03.519) 0:00:14.131 ***** 
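
The mariadb service definition repeated above feeds an haproxy TCP backend in which the first shard member (testbed-node-0) is the active server and the other members are marked "backup", so haproxy only fails over when the primary is unreachable. A minimal Python sketch of that member-list pattern, using hypothetical helper names and the addresses shown in the log, not the actual kolla-ansible template:

def haproxy_member_lines(members, port=3306):
    """members: list of (name, address) tuples, primary first."""
    lines = []
    for index, (name, address) in enumerate(members):
        line = (f" server {name} {address}:{port} "
                f"check port {port} inter 2000 rise 2 fall 5")
        if index > 0:
            line += " backup"   # only the first member serves traffic by default
        lines.append(line)
    lines.append("")            # the rendered list in the log ends with an empty entry
    return lines

if __name__ == "__main__":
    nodes = [("testbed-node-0", "192.168.16.10"),
             ("testbed-node-1", "192.168.16.11"),
             ("testbed-node-2", "192.168.16.12")]
    for entry in haproxy_member_lines(nodes):
        print(entry)
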
2025-11-08 13:55:08.326826 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:55:08.326830 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:55:08.326833 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:55:08.326837 | orchestrator | 2025-11-08 13:55:08.326841 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-11-08 13:55:08.326848 | orchestrator | Saturday 08 November 2025 13:52:21 +0000 (0:00:01.058) 0:00:15.189 ***** 2025-11-08 13:55:08.326852 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:55:08.326856 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:55:08.326860 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:55:08.326863 | orchestrator | 2025-11-08 13:55:08.326867 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-11-08 13:55:08.326871 | orchestrator | Saturday 08 November 2025 13:52:25 +0000 (0:00:03.994) 0:00:19.184 ***** 2025-11-08 13:55:08.326878 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:55:08.326882 | orchestrator | 2025-11-08 13:55:08.326890 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-11-08 13:55:08.326893 | orchestrator | Saturday 08 November 2025 13:52:25 +0000 (0:00:00.531) 0:00:19.715 ***** 2025-11-08 13:55:08.326901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-08 13:55:08.326906 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:55:08.326913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-08 13:55:08.326921 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:55:08.326929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 
fall 5 backup', '']}}}})  2025-11-08 13:55:08.326934 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:55:08.326937 | orchestrator | 2025-11-08 13:55:08.326941 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-11-08 13:55:08.326945 | orchestrator | Saturday 08 November 2025 13:52:29 +0000 (0:00:03.507) 0:00:23.223 ***** 2025-11-08 13:55:08.326949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-08 13:55:08.326959 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:55:08.326966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-08 13:55:08.326971 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:55:08.326975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-08 13:55:08.326979 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:55:08.326983 | orchestrator | 2025-11-08 13:55:08.326987 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-11-08 13:55:08.326991 | orchestrator | Saturday 08 November 2025 13:52:32 +0000 (0:00:03.217) 0:00:26.441 ***** 2025-11-08 13:55:08.327005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-08 13:55:08.327010 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:55:08.327018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-08 13:55:08.327023 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:55:08.327030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-08 13:55:08.327038 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:55:08.327042 | orchestrator | 2025-11-08 13:55:08.327046 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-11-08 13:55:08.327049 | orchestrator | Saturday 08 November 2025 13:52:35 +0000 (0:00:03.222) 0:00:29.663 ***** 2025-11-08 13:55:08.327058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-11-08 13:55:08.327066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-11-08 13:55:08.327078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-11-08 13:55:08.327082 | orchestrator | 2025-11-08 13:55:08.327087 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-11-08 13:55:08.327090 | orchestrator | Saturday 08 November 2025 13:52:40 +0000 (0:00:04.589) 0:00:34.252 ***** 2025-11-08 13:55:08.327094 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:55:08.327098 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:55:08.327102 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:55:08.327106 | orchestrator | 2025-11-08 13:55:08.327109 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-11-08 13:55:08.327113 | orchestrator | Saturday 08 November 2025 13:52:41 +0000 (0:00:01.028) 0:00:35.281 ***** 2025-11-08 13:55:08.327120 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:55:08.327124 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:55:08.327128 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:55:08.327132 | orchestrator | 2025-11-08 13:55:08.327135 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-11-08 13:55:08.327139 | orchestrator | Saturday 08 November 2025 13:52:41 +0000 (0:00:00.571) 0:00:35.852 ***** 2025-11-08 13:55:08.327144 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:55:08.327147 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:55:08.327151 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:55:08.327155 | orchestrator | 2025-11-08 13:55:08.327159 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-11-08 13:55:08.327162 | orchestrator | Saturday 08 November 2025 13:52:42 +0000 (0:00:00.391) 0:00:36.244 ***** 2025-11-08 13:55:08.327167 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-11-08 13:55:08.327172 | orchestrator | ...ignoring 2025-11-08 13:55:08.327175 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-11-08 13:55:08.327182 | orchestrator | ...ignoring 2025-11-08 13:55:08.327186 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-11-08 13:55:08.327190 | orchestrator | ...ignoring 2025-11-08 13:55:08.327194 | orchestrator | 2025-11-08 13:55:08.327197 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-11-08 13:55:08.327201 | orchestrator | Saturday 08 November 2025 13:52:53 +0000 (0:00:11.168) 0:00:47.412 ***** 2025-11-08 13:55:08.327205 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:55:08.327209 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:55:08.327213 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:55:08.327216 | orchestrator | 2025-11-08 13:55:08.327220 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-11-08 13:55:08.327224 | orchestrator | Saturday 08 November 2025 13:52:53 +0000 (0:00:00.410) 0:00:47.823 ***** 2025-11-08 13:55:08.327228 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:55:08.327232 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:55:08.327236 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:55:08.327239 | orchestrator | 2025-11-08 13:55:08.327258 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-11-08 13:55:08.327262 | orchestrator | Saturday 08 November 2025 13:52:54 +0000 (0:00:00.678) 0:00:48.502 ***** 2025-11-08 13:55:08.327266 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:55:08.327270 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:55:08.327273 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:55:08.327277 | orchestrator | 2025-11-08 13:55:08.327281 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-11-08 13:55:08.327285 | orchestrator | Saturday 08 November 2025 13:52:55 +0000 (0:00:00.470) 0:00:48.972 ***** 2025-11-08 13:55:08.327289 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:55:08.327293 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:55:08.327296 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:55:08.327300 | orchestrator | 2025-11-08 13:55:08.327304 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-11-08 13:55:08.327308 | orchestrator | Saturday 08 November 2025 13:52:55 +0000 (0:00:00.453) 0:00:49.426 ***** 2025-11-08 13:55:08.327311 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:55:08.327315 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:55:08.327319 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:55:08.327323 | orchestrator | 2025-11-08 13:55:08.327327 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-11-08 13:55:08.327331 | orchestrator | Saturday 08 November 2025 13:52:55 +0000 (0:00:00.378) 0:00:49.804 ***** 2025-11-08 13:55:08.327343 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:55:08.327347 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:55:08.327351 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:55:08.327355 | orchestrator | 2025-11-08 13:55:08.327359 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-11-08 13:55:08.327363 | orchestrator | Saturday 08 November 2025 13:52:56 +0000 (0:00:00.694) 0:00:50.499 ***** 2025-11-08 13:55:08.327366 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:55:08.327370 | orchestrator | skipping: 
[testbed-node-2] 2025-11-08 13:55:08.327374 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-11-08 13:55:08.327378 | orchestrator | 2025-11-08 13:55:08.327382 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-11-08 13:55:08.327386 | orchestrator | Saturday 08 November 2025 13:52:56 +0000 (0:00:00.392) 0:00:50.892 ***** 2025-11-08 13:55:08.327389 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:55:08.327393 | orchestrator | 2025-11-08 13:55:08.327397 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-11-08 13:55:08.327401 | orchestrator | Saturday 08 November 2025 13:53:06 +0000 (0:00:10.013) 0:01:00.906 ***** 2025-11-08 13:55:08.327404 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:55:08.327409 | orchestrator | 2025-11-08 13:55:08.327412 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-11-08 13:55:08.327416 | orchestrator | Saturday 08 November 2025 13:53:07 +0000 (0:00:00.145) 0:01:01.051 ***** 2025-11-08 13:55:08.327420 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:55:08.327424 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:55:08.327427 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:55:08.327431 | orchestrator | 2025-11-08 13:55:08.327435 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-11-08 13:55:08.327439 | orchestrator | Saturday 08 November 2025 13:53:08 +0000 (0:00:00.959) 0:01:02.011 ***** 2025-11-08 13:55:08.327443 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:55:08.327446 | orchestrator | 2025-11-08 13:55:08.327450 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-11-08 13:55:08.327454 | orchestrator | Saturday 08 November 2025 13:53:16 +0000 (0:00:08.030) 0:01:10.042 ***** 2025-11-08 13:55:08.327458 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:55:08.327462 | orchestrator | 2025-11-08 13:55:08.327465 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-11-08 13:55:08.327469 | orchestrator | Saturday 08 November 2025 13:53:17 +0000 (0:00:01.563) 0:01:11.605 ***** 2025-11-08 13:55:08.327473 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:55:08.327476 | orchestrator | 2025-11-08 13:55:08.327480 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-11-08 13:55:08.327484 | orchestrator | Saturday 08 November 2025 13:53:20 +0000 (0:00:02.452) 0:01:14.058 ***** 2025-11-08 13:55:08.327488 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:55:08.327491 | orchestrator | 2025-11-08 13:55:08.327495 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-11-08 13:55:08.327499 | orchestrator | Saturday 08 November 2025 13:53:20 +0000 (0:00:00.135) 0:01:14.193 ***** 2025-11-08 13:55:08.327503 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:55:08.327506 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:55:08.327510 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:55:08.327514 | orchestrator | 2025-11-08 13:55:08.327517 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-11-08 13:55:08.327521 | orchestrator | Saturday 08 November 2025 13:53:20 +0000 (0:00:00.308) 0:01:14.502 ***** 
2025-11-08 13:55:08.327528 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:55:08.327532 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-11-08 13:55:08.327535 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:55:08.327539 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:55:08.327543 | orchestrator | 2025-11-08 13:55:08.327547 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-11-08 13:55:08.327554 | orchestrator | skipping: no hosts matched 2025-11-08 13:55:08.327558 | orchestrator | 2025-11-08 13:55:08.327562 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-11-08 13:55:08.327566 | orchestrator | 2025-11-08 13:55:08.327569 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-11-08 13:55:08.327573 | orchestrator | Saturday 08 November 2025 13:53:21 +0000 (0:00:00.562) 0:01:15.064 ***** 2025-11-08 13:55:08.327577 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:55:08.327581 | orchestrator | 2025-11-08 13:55:08.327585 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-11-08 13:55:08.327588 | orchestrator | Saturday 08 November 2025 13:53:43 +0000 (0:00:22.541) 0:01:37.605 ***** 2025-11-08 13:55:08.327592 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:55:08.327596 | orchestrator | 2025-11-08 13:55:08.327600 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-11-08 13:55:08.327603 | orchestrator | Saturday 08 November 2025 13:53:54 +0000 (0:00:10.654) 0:01:48.260 ***** 2025-11-08 13:55:08.327607 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:55:08.327611 | orchestrator | 2025-11-08 13:55:08.327615 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-11-08 13:55:08.327619 | orchestrator | 2025-11-08 13:55:08.327623 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-11-08 13:55:08.327626 | orchestrator | Saturday 08 November 2025 13:53:56 +0000 (0:00:02.369) 0:01:50.629 ***** 2025-11-08 13:55:08.327630 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:55:08.327634 | orchestrator | 2025-11-08 13:55:08.327638 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-11-08 13:55:08.327641 | orchestrator | Saturday 08 November 2025 13:54:20 +0000 (0:00:23.760) 0:02:14.390 ***** 2025-11-08 13:55:08.327645 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:55:08.327649 | orchestrator | 2025-11-08 13:55:08.327653 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-11-08 13:55:08.327657 | orchestrator | Saturday 08 November 2025 13:54:32 +0000 (0:00:11.597) 0:02:25.987 ***** 2025-11-08 13:55:08.327661 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:55:08.327664 | orchestrator | 2025-11-08 13:55:08.327668 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-11-08 13:55:08.327672 | orchestrator | 2025-11-08 13:55:08.327679 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-11-08 13:55:08.327683 | orchestrator | Saturday 08 November 2025 13:54:34 +0000 (0:00:02.358) 0:02:28.346 ***** 2025-11-08 13:55:08.327687 | orchestrator | changed: [testbed-node-0] 
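Editor's note: the sequence above — restart the MariaDB container on each joining node, wait for the service port, then wait for WSREP to report a synced state — is how the remaining Galera members are brought into the cluster that was bootstrapped on testbed-node-0, and the earlier "Timeout when waiting for search string MariaDB" failures are the expected first-run result of the same port probe before any server is listening. A minimal sketch of equivalent checks follows; it is not the kolla-ansible implementation, and api_interface_address, database_password and the mariadb container name are assumed placeholders, not values taken from this job:

    # Sketch only: port-liveness and WSREP-sync waits comparable to the tasks logged above.
    # api_interface_address / database_password are hypothetical variables.
    - name: Wait for MariaDB service port liveness
      ansible.builtin.wait_for:
        host: "{{ api_interface_address }}"
        port: 3306
        search_regex: MariaDB      # same "search string" the failed pre-check above was waiting for
        timeout: 60

    - name: Wait for MariaDB service to sync WSREP
      ansible.builtin.shell: >
        docker exec mariadb mysql -uroot -p'{{ database_password }}'
        -e "SHOW STATUS LIKE 'wsrep_local_state_comment'"
      register: wsrep_state
      until: "'Synced' in wsrep_state.stdout"
      retries: 10
      delay: 6
      changed_when: false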
2025-11-08 13:55:08.327691 | orchestrator | 2025-11-08 13:55:08.327695 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-11-08 13:55:08.327698 | orchestrator | Saturday 08 November 2025 13:54:51 +0000 (0:00:16.676) 0:02:45.023 ***** 2025-11-08 13:55:08.327703 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:55:08.327707 | orchestrator | 2025-11-08 13:55:08.327711 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-11-08 13:55:08.327714 | orchestrator | Saturday 08 November 2025 13:54:51 +0000 (0:00:00.538) 0:02:45.562 ***** 2025-11-08 13:55:08.327718 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:55:08.327723 | orchestrator | 2025-11-08 13:55:08.327726 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-11-08 13:55:08.327730 | orchestrator | 2025-11-08 13:55:08.327734 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-11-08 13:55:08.327738 | orchestrator | Saturday 08 November 2025 13:54:54 +0000 (0:00:02.665) 0:02:48.227 ***** 2025-11-08 13:55:08.327741 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:55:08.327745 | orchestrator | 2025-11-08 13:55:08.327749 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-11-08 13:55:08.327753 | orchestrator | Saturday 08 November 2025 13:54:54 +0000 (0:00:00.542) 0:02:48.770 ***** 2025-11-08 13:55:08.327762 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:55:08.327766 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:55:08.327770 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:55:08.327773 | orchestrator | 2025-11-08 13:55:08.327777 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-11-08 13:55:08.327781 | orchestrator | Saturday 08 November 2025 13:54:57 +0000 (0:00:02.370) 0:02:51.140 ***** 2025-11-08 13:55:08.327785 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:55:08.327789 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:55:08.327792 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:55:08.327796 | orchestrator | 2025-11-08 13:55:08.327801 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-11-08 13:55:08.327804 | orchestrator | Saturday 08 November 2025 13:54:59 +0000 (0:00:02.284) 0:02:53.425 ***** 2025-11-08 13:55:08.327808 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:55:08.327812 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:55:08.327816 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:55:08.327820 | orchestrator | 2025-11-08 13:55:08.327824 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-11-08 13:55:08.327827 | orchestrator | Saturday 08 November 2025 13:55:01 +0000 (0:00:02.428) 0:02:55.853 ***** 2025-11-08 13:55:08.327832 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:55:08.327836 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:55:08.327839 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:55:08.327843 | orchestrator | 2025-11-08 13:55:08.327847 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-11-08 13:55:08.327850 | orchestrator | Saturday 08 November 2025 13:55:04 +0000 (0:00:02.245) 0:02:58.098 ***** 
2025-11-08 13:55:08.327854 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:55:08.327858 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:55:08.327863 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:55:08.327866 | orchestrator | 2025-11-08 13:55:08.327870 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-11-08 13:55:08.327877 | orchestrator | Saturday 08 November 2025 13:55:07 +0000 (0:00:03.026) 0:03:01.124 ***** 2025-11-08 13:55:08.327881 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:55:08.327885 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:55:08.327889 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:55:08.327893 | orchestrator | 2025-11-08 13:55:08.327896 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 13:55:08.327900 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-11-08 13:55:08.327905 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-11-08 13:55:08.327910 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-11-08 13:55:08.327914 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-11-08 13:55:08.327918 | orchestrator | 2025-11-08 13:55:08.327922 | orchestrator | 2025-11-08 13:55:08.327926 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 13:55:08.327929 | orchestrator | Saturday 08 November 2025 13:55:07 +0000 (0:00:00.247) 0:03:01.372 ***** 2025-11-08 13:55:08.327933 | orchestrator | =============================================================================== 2025-11-08 13:55:08.327938 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 46.30s 2025-11-08 13:55:08.327942 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 22.25s 2025-11-08 13:55:08.327946 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 16.68s 2025-11-08 13:55:08.327957 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.17s 2025-11-08 13:55:08.327961 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.01s 2025-11-08 13:55:08.327965 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.03s 2025-11-08 13:55:08.327972 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.73s 2025-11-08 13:55:08.327976 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 4.59s 2025-11-08 13:55:08.327980 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.99s 2025-11-08 13:55:08.327984 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.52s 2025-11-08 13:55:08.327988 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.51s 2025-11-08 13:55:08.327991 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.22s 2025-11-08 13:55:08.327995 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.22s 2025-11-08 13:55:08.327999 | orchestrator | mariadb : Ensuring config directories 
exist ----------------------------- 3.15s 2025-11-08 13:55:08.328003 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.03s 2025-11-08 13:55:08.328007 | orchestrator | Check MariaDB service --------------------------------------------------- 2.93s 2025-11-08 13:55:08.328010 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.67s 2025-11-08 13:55:08.328014 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.45s 2025-11-08 13:55:08.328018 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.43s 2025-11-08 13:55:08.328022 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.37s 2025-11-08 13:55:08.328026 | orchestrator | 2025-11-08 13:55:08 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:55:11.371985 | orchestrator | 2025-11-08 13:55:11 | INFO  | Task df1975d7-ec3c-4993-a36a-5c97e891420c is in state STARTED 2025-11-08 13:55:11.374921 | orchestrator | 2025-11-08 13:55:11 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:55:11.376999 | orchestrator | 2025-11-08 13:55:11 | INFO  | Task 9c22c8e5-59d9-4675-8c90-0d45b4ffcfb3 is in state STARTED 2025-11-08 13:55:11.378725 | orchestrator | 2025-11-08 13:55:11 | INFO  | Task 79435321-4aaf-466d-9b49-f20c922a86ba is in state STARTED 2025-11-08 13:55:11.378889 | orchestrator | 2025-11-08 13:55:11 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:55:14.422955 | orchestrator | 2025-11-08 13:55:14.423389 | orchestrator | 2025-11-08 13:55:14 | INFO  | Task df1975d7-ec3c-4993-a36a-5c97e891420c is in state SUCCESS 2025-11-08 13:55:14.424369 | orchestrator | 2025-11-08 13:55:14.424449 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-08 13:55:14.424458 | orchestrator | 2025-11-08 13:55:14.424465 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-08 13:55:14.424471 | orchestrator | Saturday 08 November 2025 13:52:06 +0000 (0:00:00.256) 0:00:00.256 ***** 2025-11-08 13:55:14.424477 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:55:14.424484 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:55:14.424489 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:55:14.424495 | orchestrator | 2025-11-08 13:55:14.424501 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-08 13:55:14.424507 | orchestrator | Saturday 08 November 2025 13:52:06 +0000 (0:00:00.297) 0:00:00.554 ***** 2025-11-08 13:55:14.424526 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-11-08 13:55:14.424533 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-11-08 13:55:14.424538 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-11-08 13:55:14.424544 | orchestrator | 2025-11-08 13:55:14.424549 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-11-08 13:55:14.424572 | orchestrator | 2025-11-08 13:55:14.424578 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-11-08 13:55:14.424583 | orchestrator | Saturday 08 November 2025 13:52:07 +0000 (0:00:00.502) 0:00:01.057 ***** 2025-11-08 13:55:14.424589 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2025-11-08 13:55:14.424595 | orchestrator | 2025-11-08 13:55:14.424601 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-11-08 13:55:14.424606 | orchestrator | Saturday 08 November 2025 13:52:07 +0000 (0:00:00.523) 0:00:01.581 ***** 2025-11-08 13:55:14.424612 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-11-08 13:55:14.424618 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-11-08 13:55:14.424624 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-11-08 13:55:14.424629 | orchestrator | 2025-11-08 13:55:14.424635 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-11-08 13:55:14.424640 | orchestrator | Saturday 08 November 2025 13:52:08 +0000 (0:00:00.702) 0:00:02.283 ***** 2025-11-08 13:55:14.424649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-08 13:55:14.424659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-08 13:55:14.424678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 
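Editor's note: the "Setting sysctl values" task above raises vm.max_map_count to 262144 on every node before the OpenSearch containers are configured; OpenSearch (like Elasticsearch) refuses to start with a lower limit because of its memory-mapped index files. A standalone sketch of the same setting is shown below; the become usage is an assumption for illustration, not taken from this job:

    # Sketch: persistently raise vm.max_map_count, as the opensearch role does above.
    - name: Set vm.max_map_count for OpenSearch
      become: true
      ansible.posix.sysctl:
        name: vm.max_map_count
        value: "262144"
        state: present
        sysctl_set: true
        reload: true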
2025-11-08 13:55:14.424691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-08 13:55:14.424703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-08 13:55:14.424709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-08 13:55:14.424715 | orchestrator | 2025-11-08 13:55:14.424721 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-11-08 13:55:14.424727 | orchestrator | Saturday 08 November 2025 13:52:10 +0000 (0:00:01.830) 0:00:04.113 
***** 2025-11-08 13:55:14.424732 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:55:14.424738 | orchestrator | 2025-11-08 13:55:14.424744 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-11-08 13:55:14.424749 | orchestrator | Saturday 08 November 2025 13:52:10 +0000 (0:00:00.579) 0:00:04.693 ***** 2025-11-08 13:55:14.424760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-08 13:55:14.424776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-08 13:55:14.424783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-08 13:55:14.424789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-08 13:55:14.424799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-08 13:55:14.424813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-08 13:55:14.424819 | orchestrator | 2025-11-08 13:55:14.424825 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-11-08 13:55:14.424830 | orchestrator | Saturday 08 November 2025 13:52:13 +0000 (0:00:02.691) 0:00:07.385 ***** 2025-11-08 13:55:14.424836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-11-08 13:55:14.424842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-11-08 13:55:14.424849 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:55:14.424855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-11-08 13:55:14.424938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-11-08 13:55:14.424950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-11-08 13:55:14.424958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-11-08 13:55:14.424964 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:55:14.424970 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:55:14.424977 | orchestrator | 2025-11-08 13:55:14.424982 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-11-08 13:55:14.424988 | orchestrator | Saturday 08 November 2025 13:52:14 +0000 (0:00:01.292) 0:00:08.677 ***** 2025-11-08 13:55:14.424995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-11-08 13:55:14.425018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-11-08 13:55:14.425025 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:55:14.425032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-11-08 13:55:14.425039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-11-08 13:55:14.425046 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:55:14.425052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-11-08 13:55:14.425072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-11-08 13:55:14.425079 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:55:14.425085 | orchestrator | 2025-11-08 13:55:14.425091 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-11-08 13:55:14.425097 | orchestrator | Saturday 08 November 2025 13:52:15 +0000 (0:00:00.959) 0:00:09.637 ***** 2025-11-08 13:55:14.425104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-08 13:55:14.425110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-08 13:55:14.425117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 
'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-08 13:55:14.425136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-08 13:55:14.425146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-08 13:55:14.425154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-08 13:55:14.425160 | orchestrator | 2025-11-08 13:55:14.425166 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-11-08 13:55:14.425172 | orchestrator | Saturday 08 November 2025 13:52:18 +0000 (0:00:02.339) 0:00:11.976 ***** 2025-11-08 13:55:14.425179 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:55:14.425185 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:55:14.425191 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:55:14.425202 | orchestrator | 2025-11-08 13:55:14.425208 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-11-08 13:55:14.425214 | orchestrator | Saturday 08 November 2025 13:52:20 +0000 (0:00:02.765) 0:00:14.742 ***** 2025-11-08 13:55:14.425220 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:55:14.425226 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:55:14.425232 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:55:14.425274 | orchestrator | 2025-11-08 13:55:14.425280 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-11-08 13:55:14.425286 | orchestrator | Saturday 08 November 2025 13:52:22 +0000 (0:00:02.022) 0:00:16.765 ***** 2025-11-08 13:55:14.425293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-08 13:55:14.425308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-08 13:55:14.425314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': 
{'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-08 13:55:14.425320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-08 13:55:14.425331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-08 13:55:14.425346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-08 13:55:14.425352 | orchestrator | 2025-11-08 13:55:14.425358 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-11-08 13:55:14.425363 | orchestrator | Saturday 08 November 2025 13:52:24 +0000 (0:00:02.167) 0:00:18.932 ***** 2025-11-08 13:55:14.425369 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:55:14.425374 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:55:14.425380 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:55:14.425385 | orchestrator | 2025-11-08 13:55:14.425391 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-11-08 13:55:14.425396 | orchestrator | Saturday 08 November 2025 13:52:25 +0000 (0:00:00.303) 0:00:19.236 ***** 2025-11-08 13:55:14.425402 | orchestrator | 2025-11-08 13:55:14.425407 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-11-08 13:55:14.425412 | orchestrator | Saturday 08 November 2025 13:52:25 +0000 (0:00:00.064) 0:00:19.300 ***** 2025-11-08 13:55:14.425418 | orchestrator | 2025-11-08 13:55:14.425423 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-11-08 13:55:14.425429 | orchestrator | Saturday 08 November 2025 13:52:25 +0000 (0:00:00.063) 0:00:19.363 ***** 2025-11-08 13:55:14.425434 | orchestrator | 2025-11-08 13:55:14.425439 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-11-08 13:55:14.425445 | orchestrator | Saturday 08 November 2025 13:52:25 +0000 (0:00:00.078) 0:00:19.442 ***** 2025-11-08 13:55:14.425450 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:55:14.425455 | orchestrator | 2025-11-08 13:55:14.425461 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-11-08 13:55:14.425471 | orchestrator | Saturday 08 November 2025 13:52:25 +0000 (0:00:00.209) 0:00:19.652 ***** 2025-11-08 13:55:14.425477 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:55:14.425482 | orchestrator | 2025-11-08 13:55:14.425487 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-11-08 13:55:14.425493 | orchestrator | Saturday 08 November 2025 13:52:26 +0000 (0:00:00.692) 0:00:20.344 ***** 2025-11-08 13:55:14.425498 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:55:14.425504 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:55:14.425509 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:55:14.425514 | orchestrator | 2025-11-08 13:55:14.425520 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-11-08 13:55:14.425525 | orchestrator | Saturday 08 November 2025 13:53:37 +0000 (0:01:10.932) 0:01:31.276 ***** 2025-11-08 13:55:14.425531 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:55:14.425536 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:55:14.425541 | orchestrator | changed: [testbed-node-2] 2025-11-08 
13:55:14.425547 | orchestrator | 2025-11-08 13:55:14.425552 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-11-08 13:55:14.425557 | orchestrator | Saturday 08 November 2025 13:55:03 +0000 (0:01:26.012) 0:02:57.288 ***** 2025-11-08 13:55:14.425563 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:55:14.425568 | orchestrator | 2025-11-08 13:55:14.425574 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-11-08 13:55:14.425579 | orchestrator | Saturday 08 November 2025 13:55:03 +0000 (0:00:00.650) 0:02:57.939 ***** 2025-11-08 13:55:14.425585 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:55:14.425590 | orchestrator | 2025-11-08 13:55:14.425595 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-11-08 13:55:14.425601 | orchestrator | Saturday 08 November 2025 13:55:06 +0000 (0:00:02.481) 0:03:00.421 ***** 2025-11-08 13:55:14.425606 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:55:14.425612 | orchestrator | 2025-11-08 13:55:14.425617 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-11-08 13:55:14.425622 | orchestrator | Saturday 08 November 2025 13:55:08 +0000 (0:00:02.197) 0:03:02.618 ***** 2025-11-08 13:55:14.425628 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:55:14.425633 | orchestrator | 2025-11-08 13:55:14.425638 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-11-08 13:55:14.425644 | orchestrator | Saturday 08 November 2025 13:55:11 +0000 (0:00:02.603) 0:03:05.222 ***** 2025-11-08 13:55:14.425649 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:55:14.425655 | orchestrator | 2025-11-08 13:55:14.425660 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 13:55:14.425666 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-11-08 13:55:14.425674 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-11-08 13:55:14.425679 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-11-08 13:55:14.425684 | orchestrator | 2025-11-08 13:55:14.425690 | orchestrator | 2025-11-08 13:55:14.425695 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 13:55:14.425705 | orchestrator | Saturday 08 November 2025 13:55:13 +0000 (0:00:02.614) 0:03:07.837 ***** 2025-11-08 13:55:14.425711 | orchestrator | =============================================================================== 2025-11-08 13:55:14.425716 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 86.01s 2025-11-08 13:55:14.425722 | orchestrator | opensearch : Restart opensearch container ------------------------------ 70.93s 2025-11-08 13:55:14.425727 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.77s 2025-11-08 13:55:14.425736 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.69s 2025-11-08 13:55:14.425746 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.61s 2025-11-08 13:55:14.425751 | orchestrator | opensearch : Create new 
log retention policy ---------------------------- 2.60s 2025-11-08 13:55:14.425757 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.48s 2025-11-08 13:55:14.425762 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.34s 2025-11-08 13:55:14.425767 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.20s 2025-11-08 13:55:14.425773 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.17s 2025-11-08 13:55:14.425778 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.02s 2025-11-08 13:55:14.425784 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.83s 2025-11-08 13:55:14.425789 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.29s 2025-11-08 13:55:14.425795 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.96s 2025-11-08 13:55:14.425800 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.70s 2025-11-08 13:55:14.425806 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.69s 2025-11-08 13:55:14.425811 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.65s 2025-11-08 13:55:14.425816 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.58s 2025-11-08 13:55:14.425822 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.52s 2025-11-08 13:55:14.425827 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.50s 2025-11-08 13:55:14.425832 | orchestrator | 2025-11-08 13:55:14 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:55:14.426588 | orchestrator | 2025-11-08 13:55:14 | INFO  | Task 9c22c8e5-59d9-4675-8c90-0d45b4ffcfb3 is in state STARTED 2025-11-08 13:55:14.428182 | orchestrator | 2025-11-08 13:55:14 | INFO  | Task 79435321-4aaf-466d-9b49-f20c922a86ba is in state STARTED 2025-11-08 13:55:14.428226 | orchestrator | 2025-11-08 13:55:14 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:55:17.469795 | orchestrator | 2025-11-08 13:55:17 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:55:17.471991 | orchestrator | 2025-11-08 13:55:17 | INFO  | Task 9c22c8e5-59d9-4675-8c90-0d45b4ffcfb3 is in state STARTED 2025-11-08 13:55:17.475429 | orchestrator | 2025-11-08 13:55:17 | INFO  | Task 79435321-4aaf-466d-9b49-f20c922a86ba is in state STARTED 2025-11-08 13:55:17.475471 | orchestrator | 2025-11-08 13:55:17 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:55:20.525705 | orchestrator | 2025-11-08 13:55:20 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:55:20.526328 | orchestrator | 2025-11-08 13:55:20 | INFO  | Task 9c22c8e5-59d9-4675-8c90-0d45b4ffcfb3 is in state STARTED 2025-11-08 13:55:20.527180 | orchestrator | 2025-11-08 13:55:20 | INFO  | Task 79435321-4aaf-466d-9b49-f20c922a86ba is in state STARTED 2025-11-08 13:55:20.527207 | orchestrator | 2025-11-08 13:55:20 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:55:23.562200 | orchestrator | 2025-11-08 13:55:23 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:55:23.563595 | orchestrator | 2025-11-08 
13:55:23 | INFO  | Task 9c22c8e5-59d9-4675-8c90-0d45b4ffcfb3 is in state STARTED 2025-11-08 13:55:23.564601 | orchestrator | 2025-11-08 13:55:23 | INFO  | Task 79435321-4aaf-466d-9b49-f20c922a86ba is in state STARTED 2025-11-08 13:55:23.564665 | orchestrator | 2025-11-08 13:55:23 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:55:26.600780 | orchestrator | 2025-11-08 13:55:26 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:55:26.602420 | orchestrator | 2025-11-08 13:55:26 | INFO  | Task 9c22c8e5-59d9-4675-8c90-0d45b4ffcfb3 is in state STARTED 2025-11-08 13:55:26.604567 | orchestrator | 2025-11-08 13:55:26 | INFO  | Task 79435321-4aaf-466d-9b49-f20c922a86ba is in state STARTED 2025-11-08 13:55:26.604601 | orchestrator | 2025-11-08 13:55:26 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:55:29.634945 | orchestrator | 2025-11-08 13:55:29 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:55:29.635259 | orchestrator | 2025-11-08 13:55:29 | INFO  | Task 9c22c8e5-59d9-4675-8c90-0d45b4ffcfb3 is in state STARTED 2025-11-08 13:55:29.636043 | orchestrator | 2025-11-08 13:55:29 | INFO  | Task 79435321-4aaf-466d-9b49-f20c922a86ba is in state STARTED 2025-11-08 13:55:29.636079 | orchestrator | 2025-11-08 13:55:29 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:55:32.677366 | orchestrator | 2025-11-08 13:55:32 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:55:32.681138 | orchestrator | 2025-11-08 13:55:32 | INFO  | Task 9c22c8e5-59d9-4675-8c90-0d45b4ffcfb3 is in state STARTED 2025-11-08 13:55:32.682887 | orchestrator | 2025-11-08 13:55:32 | INFO  | Task 79435321-4aaf-466d-9b49-f20c922a86ba is in state STARTED 2025-11-08 13:55:32.682959 | orchestrator | 2025-11-08 13:55:32 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:55:35.731540 | orchestrator | 2025-11-08 13:55:35 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:55:35.732814 | orchestrator | 2025-11-08 13:55:35 | INFO  | Task 9c22c8e5-59d9-4675-8c90-0d45b4ffcfb3 is in state STARTED 2025-11-08 13:55:35.735536 | orchestrator | 2025-11-08 13:55:35 | INFO  | Task 79435321-4aaf-466d-9b49-f20c922a86ba is in state STARTED 2025-11-08 13:55:35.735552 | orchestrator | 2025-11-08 13:55:35 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:55:38.771095 | orchestrator | 2025-11-08 13:55:38 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:55:38.771269 | orchestrator | 2025-11-08 13:55:38 | INFO  | Task 9c22c8e5-59d9-4675-8c90-0d45b4ffcfb3 is in state STARTED 2025-11-08 13:55:38.772522 | orchestrator | 2025-11-08 13:55:38 | INFO  | Task 79435321-4aaf-466d-9b49-f20c922a86ba is in state STARTED 2025-11-08 13:55:38.772537 | orchestrator | 2025-11-08 13:55:38 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:55:41.825977 | orchestrator | 2025-11-08 13:55:41 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:55:41.826583 | orchestrator | 2025-11-08 13:55:41 | INFO  | Task 9c22c8e5-59d9-4675-8c90-0d45b4ffcfb3 is in state STARTED 2025-11-08 13:55:41.827661 | orchestrator | 2025-11-08 13:55:41 | INFO  | Task 79435321-4aaf-466d-9b49-f20c922a86ba is in state STARTED 2025-11-08 13:55:41.827707 | orchestrator | 2025-11-08 13:55:41 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:55:44.863785 | orchestrator | 2025-11-08 13:55:44 | INFO  | Task 
b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:55:44.864834 | orchestrator | 2025-11-08 13:55:44 | INFO  | Task 9c22c8e5-59d9-4675-8c90-0d45b4ffcfb3 is in state STARTED 2025-11-08 13:55:44.868263 | orchestrator | 2025-11-08 13:55:44 | INFO  | Task 79435321-4aaf-466d-9b49-f20c922a86ba is in state STARTED 2025-11-08 13:55:44.869595 | orchestrator | 2025-11-08 13:55:44 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:55:47.905791 | orchestrator | 2025-11-08 13:55:47 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:55:47.907231 | orchestrator | 2025-11-08 13:55:47 | INFO  | Task 9c22c8e5-59d9-4675-8c90-0d45b4ffcfb3 is in state STARTED 2025-11-08 13:55:47.908907 | orchestrator | 2025-11-08 13:55:47 | INFO  | Task 79435321-4aaf-466d-9b49-f20c922a86ba is in state STARTED 2025-11-08 13:55:47.909048 | orchestrator | 2025-11-08 13:55:47 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:55:50.950763 | orchestrator | 2025-11-08 13:55:50 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:55:50.953428 | orchestrator | 2025-11-08 13:55:50 | INFO  | Task 9c22c8e5-59d9-4675-8c90-0d45b4ffcfb3 is in state STARTED 2025-11-08 13:55:50.955547 | orchestrator | 2025-11-08 13:55:50 | INFO  | Task 79435321-4aaf-466d-9b49-f20c922a86ba is in state STARTED 2025-11-08 13:55:50.955657 | orchestrator | 2025-11-08 13:55:50 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:55:54.001067 | orchestrator | 2025-11-08 13:55:53 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:55:54.003681 | orchestrator | 2025-11-08 13:55:54 | INFO  | Task 9c22c8e5-59d9-4675-8c90-0d45b4ffcfb3 is in state STARTED 2025-11-08 13:55:54.005640 | orchestrator | 2025-11-08 13:55:54 | INFO  | Task 79435321-4aaf-466d-9b49-f20c922a86ba is in state STARTED 2025-11-08 13:55:54.005688 | orchestrator | 2025-11-08 13:55:54 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:55:57.044371 | orchestrator | 2025-11-08 13:55:57 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:55:57.046150 | orchestrator | 2025-11-08 13:55:57 | INFO  | Task 9c22c8e5-59d9-4675-8c90-0d45b4ffcfb3 is in state STARTED 2025-11-08 13:55:57.048500 | orchestrator | 2025-11-08 13:55:57 | INFO  | Task 79435321-4aaf-466d-9b49-f20c922a86ba is in state STARTED 2025-11-08 13:55:57.048523 | orchestrator | 2025-11-08 13:55:57 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:56:00.094466 | orchestrator | 2025-11-08 13:56:00 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:56:00.096871 | orchestrator | 2025-11-08 13:56:00 | INFO  | Task 9c22c8e5-59d9-4675-8c90-0d45b4ffcfb3 is in state STARTED 2025-11-08 13:56:00.099082 | orchestrator | 2025-11-08 13:56:00 | INFO  | Task 79435321-4aaf-466d-9b49-f20c922a86ba is in state STARTED 2025-11-08 13:56:00.099118 | orchestrator | 2025-11-08 13:56:00 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:56:03.132092 | orchestrator | 2025-11-08 13:56:03 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:56:03.135033 | orchestrator | 2025-11-08 13:56:03 | INFO  | Task 9c22c8e5-59d9-4675-8c90-0d45b4ffcfb3 is in state STARTED 2025-11-08 13:56:03.135844 | orchestrator | 2025-11-08 13:56:03 | INFO  | Task 79435321-4aaf-466d-9b49-f20c922a86ba is in state STARTED 2025-11-08 13:56:03.135871 | orchestrator | 2025-11-08 13:56:03 | INFO  | Wait 1 second(s) until the next 
check 2025-11-08 13:56:06.190980 | orchestrator | 2025-11-08 13:56:06 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:56:06.193221 | orchestrator | 2025-11-08 13:56:06 | INFO  | Task 9c22c8e5-59d9-4675-8c90-0d45b4ffcfb3 is in state STARTED 2025-11-08 13:56:06.195098 | orchestrator | 2025-11-08 13:56:06 | INFO  | Task 79435321-4aaf-466d-9b49-f20c922a86ba is in state STARTED 2025-11-08 13:56:06.195137 | orchestrator | 2025-11-08 13:56:06 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:56:09.237504 | orchestrator | 2025-11-08 13:56:09 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:56:09.239108 | orchestrator | 2025-11-08 13:56:09 | INFO  | Task 9c22c8e5-59d9-4675-8c90-0d45b4ffcfb3 is in state STARTED 2025-11-08 13:56:09.240937 | orchestrator | 2025-11-08 13:56:09 | INFO  | Task 79435321-4aaf-466d-9b49-f20c922a86ba is in state STARTED 2025-11-08 13:56:09.240970 | orchestrator | 2025-11-08 13:56:09 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:56:12.283102 | orchestrator | 2025-11-08 13:56:12 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:56:12.283885 | orchestrator | 2025-11-08 13:56:12 | INFO  | Task 9c22c8e5-59d9-4675-8c90-0d45b4ffcfb3 is in state STARTED 2025-11-08 13:56:12.285789 | orchestrator | 2025-11-08 13:56:12 | INFO  | Task 79435321-4aaf-466d-9b49-f20c922a86ba is in state STARTED 2025-11-08 13:56:12.285839 | orchestrator | 2025-11-08 13:56:12 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:56:15.335112 | orchestrator | 2025-11-08 13:56:15 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:56:15.341195 | orchestrator | 2025-11-08 13:56:15 | INFO  | Task 9c22c8e5-59d9-4675-8c90-0d45b4ffcfb3 is in state STARTED 2025-11-08 13:56:15.343534 | orchestrator | 2025-11-08 13:56:15 | INFO  | Task 79435321-4aaf-466d-9b49-f20c922a86ba is in state STARTED 2025-11-08 13:56:15.344027 | orchestrator | 2025-11-08 13:56:15 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:56:18.388979 | orchestrator | 2025-11-08 13:56:18 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:56:18.390938 | orchestrator | 2025-11-08 13:56:18 | INFO  | Task 9c22c8e5-59d9-4675-8c90-0d45b4ffcfb3 is in state STARTED 2025-11-08 13:56:18.392795 | orchestrator | 2025-11-08 13:56:18 | INFO  | Task 79435321-4aaf-466d-9b49-f20c922a86ba is in state STARTED 2025-11-08 13:56:18.392839 | orchestrator | 2025-11-08 13:56:18 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:56:21.425679 | orchestrator | 2025-11-08 13:56:21 | INFO  | Task c01dc5a1-f579-4745-af6f-929c7a7cacd7 is in state STARTED 2025-11-08 13:56:21.426106 | orchestrator | 2025-11-08 13:56:21 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:56:21.428399 | orchestrator | 2025-11-08 13:56:21 | INFO  | Task 9c22c8e5-59d9-4675-8c90-0d45b4ffcfb3 is in state SUCCESS 2025-11-08 13:56:21.429884 | orchestrator | 2025-11-08 13:56:21.429914 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2025-11-08 13:56:21.429920 | orchestrator | 2.16.14 2025-11-08 13:56:21.429925 | orchestrator | 2025-11-08 13:56:21.429929 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-11-08 13:56:21.429934 | orchestrator | 2025-11-08 13:56:21.429938 | orchestrator | TASK [ceph-facts : Include facts.yml] 
****************************************** 2025-11-08 13:56:21.429943 | orchestrator | Saturday 08 November 2025 13:54:08 +0000 (0:00:00.574) 0:00:00.574 ***** 2025-11-08 13:56:21.429948 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 13:56:21.429953 | orchestrator | 2025-11-08 13:56:21.429957 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-11-08 13:56:21.429960 | orchestrator | Saturday 08 November 2025 13:54:09 +0000 (0:00:00.653) 0:00:01.227 ***** 2025-11-08 13:56:21.429977 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:56:21.429982 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:56:21.429985 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:56:21.429989 | orchestrator | 2025-11-08 13:56:21.429993 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-11-08 13:56:21.429997 | orchestrator | Saturday 08 November 2025 13:54:09 +0000 (0:00:00.682) 0:00:01.910 ***** 2025-11-08 13:56:21.430000 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:56:21.430046 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:56:21.430052 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:56:21.430056 | orchestrator | 2025-11-08 13:56:21.430059 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-11-08 13:56:21.430063 | orchestrator | Saturday 08 November 2025 13:54:10 +0000 (0:00:00.313) 0:00:02.223 ***** 2025-11-08 13:56:21.430067 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:56:21.430071 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:56:21.430075 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:56:21.430078 | orchestrator | 2025-11-08 13:56:21.430082 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-11-08 13:56:21.430086 | orchestrator | Saturday 08 November 2025 13:54:10 +0000 (0:00:00.948) 0:00:03.171 ***** 2025-11-08 13:56:21.430090 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:56:21.430094 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:56:21.430098 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:56:21.430101 | orchestrator | 2025-11-08 13:56:21.430105 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-11-08 13:56:21.430109 | orchestrator | Saturday 08 November 2025 13:54:11 +0000 (0:00:00.334) 0:00:03.506 ***** 2025-11-08 13:56:21.430113 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:56:21.430117 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:56:21.430178 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:56:21.430262 | orchestrator | 2025-11-08 13:56:21.430267 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-11-08 13:56:21.430271 | orchestrator | Saturday 08 November 2025 13:54:11 +0000 (0:00:00.337) 0:00:03.843 ***** 2025-11-08 13:56:21.430275 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:56:21.430279 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:56:21.430283 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:56:21.430287 | orchestrator | 2025-11-08 13:56:21.430290 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-11-08 13:56:21.430321 | orchestrator | Saturday 08 November 2025 13:54:12 +0000 (0:00:00.398) 0:00:04.241 ***** 2025-11-08 13:56:21.430325 | orchestrator | skipping: [testbed-node-3] 
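The ceph-facts tasks above detect the container runtime ("Check if podman binary is present", "Set_fact container_binary") and, a little further down, locate the running monitor container with docker ps -q --filter name=ceph-mon-<host>. A minimal sketch of that detection, assuming only that podman is preferred over docker when both are installed (the role itself may apply extra conditions), could look like:

import shutil
import socket
import subprocess

# Prefer podman when present, otherwise fall back to docker
# (assumption: only these two runtimes are considered).
container_binary = "podman" if shutil.which("podman") else "docker"

# Name of the monitor container on this host, matching the
# ceph-mon-<hostname> pattern visible in the log above.
mon_container = f"ceph-mon-{socket.gethostname()}"

# Rough equivalent of the "Find a running mon container" task:
# <runtime> ps -q --filter name=ceph-mon-<hostname>
result = subprocess.run(
    [container_binary, "ps", "-q", "--filter", f"name={mon_container}"],
    capture_output=True, text=True, check=False,
)
running_id = result.stdout.strip()

# Command prefix for running ceph inside that container, similar in
# spirit to the ceph_cmd fact (illustrative, not the role's exact value).
ceph_cmd = [container_binary, "exec", mon_container, "ceph", "--cluster", "ceph"]

print(container_binary)
print(running_id or "<no mon container>")
print(" ".join(ceph_cmd))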
2025-11-08 13:56:21.430330 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:56:21.430334 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:56:21.430338 | orchestrator | 2025-11-08 13:56:21.430341 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-11-08 13:56:21.430345 | orchestrator | Saturday 08 November 2025 13:54:12 +0000 (0:00:00.630) 0:00:04.872 ***** 2025-11-08 13:56:21.430349 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:56:21.430353 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:56:21.430467 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:56:21.430470 | orchestrator | 2025-11-08 13:56:21.430474 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-11-08 13:56:21.430478 | orchestrator | Saturday 08 November 2025 13:54:13 +0000 (0:00:00.328) 0:00:05.200 ***** 2025-11-08 13:56:21.430482 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-11-08 13:56:21.430486 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-11-08 13:56:21.430490 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-11-08 13:56:21.430493 | orchestrator | 2025-11-08 13:56:21.430497 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-11-08 13:56:21.430501 | orchestrator | Saturday 08 November 2025 13:54:13 +0000 (0:00:00.639) 0:00:05.839 ***** 2025-11-08 13:56:21.430505 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:56:21.430508 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:56:21.430512 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:56:21.430516 | orchestrator | 2025-11-08 13:56:21.430520 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-11-08 13:56:21.430523 | orchestrator | Saturday 08 November 2025 13:54:14 +0000 (0:00:00.423) 0:00:06.263 ***** 2025-11-08 13:56:21.430527 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-11-08 13:56:21.430538 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-11-08 13:56:21.430542 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-11-08 13:56:21.430546 | orchestrator | 2025-11-08 13:56:21.430549 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-11-08 13:56:21.430553 | orchestrator | Saturday 08 November 2025 13:54:16 +0000 (0:00:02.202) 0:00:08.465 ***** 2025-11-08 13:56:21.430557 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-11-08 13:56:21.430561 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-11-08 13:56:21.430565 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-11-08 13:56:21.430569 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:56:21.430573 | orchestrator | 2025-11-08 13:56:21.430584 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-11-08 13:56:21.430588 | orchestrator | Saturday 08 November 2025 13:54:16 +0000 (0:00:00.605) 0:00:09.071 ***** 2025-11-08 13:56:21.430594 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.430600 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.430609 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.430613 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:56:21.430617 | orchestrator | 2025-11-08 13:56:21.430621 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-11-08 13:56:21.430624 | orchestrator | Saturday 08 November 2025 13:54:17 +0000 (0:00:00.785) 0:00:09.856 ***** 2025-11-08 13:56:21.430630 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.430637 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.430640 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.430644 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:56:21.430648 | orchestrator | 2025-11-08 13:56:21.430652 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-11-08 13:56:21.430656 | orchestrator | Saturday 08 November 2025 13:54:17 +0000 (0:00:00.315) 0:00:10.171 ***** 2025-11-08 13:56:21.430662 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '0c4807ab1731', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-11-08 13:54:14.796853', 'end': '2025-11-08 13:54:14.847377', 'delta': '0:00:00.050524', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['0c4807ab1731'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-11-08 13:56:21.430673 | 
orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '91989c7afd33', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-11-08 13:54:15.539957', 'end': '2025-11-08 13:54:15.575635', 'delta': '0:00:00.035678', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['91989c7afd33'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-11-08 13:56:21.430686 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'bf21459e2a96', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-11-08 13:54:16.106950', 'end': '2025-11-08 13:54:16.147597', 'delta': '0:00:00.040647', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['bf21459e2a96'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-11-08 13:56:21.430714 | orchestrator | 2025-11-08 13:56:21.430718 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-11-08 13:56:21.430722 | orchestrator | Saturday 08 November 2025 13:54:18 +0000 (0:00:00.203) 0:00:10.375 ***** 2025-11-08 13:56:21.430805 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:56:21.430815 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:56:21.430959 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:56:21.430963 | orchestrator | 2025-11-08 13:56:21.430977 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-11-08 13:56:21.430982 | orchestrator | Saturday 08 November 2025 13:54:18 +0000 (0:00:00.487) 0:00:10.863 ***** 2025-11-08 13:56:21.430986 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-11-08 13:56:21.430990 | orchestrator | 2025-11-08 13:56:21.430994 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-11-08 13:56:21.430997 | orchestrator | Saturday 08 November 2025 13:54:20 +0000 (0:00:01.791) 0:00:12.654 ***** 2025-11-08 13:56:21.431001 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:56:21.431005 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:56:21.431009 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:56:21.431013 | orchestrator | 2025-11-08 13:56:21.431017 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-11-08 13:56:21.431021 | orchestrator | Saturday 08 November 2025 13:54:20 +0000 (0:00:00.285) 0:00:12.940 ***** 2025-11-08 13:56:21.431024 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:56:21.431028 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:56:21.431032 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:56:21.431036 | orchestrator | 2025-11-08 13:56:21.431039 | orchestrator | TASK [ceph-facts : Set_fact fsid] 
********************************************** 2025-11-08 13:56:21.431043 | orchestrator | Saturday 08 November 2025 13:54:21 +0000 (0:00:00.421) 0:00:13.362 ***** 2025-11-08 13:56:21.431053 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:56:21.431057 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:56:21.431060 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:56:21.431064 | orchestrator | 2025-11-08 13:56:21.431068 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-11-08 13:56:21.431072 | orchestrator | Saturday 08 November 2025 13:54:21 +0000 (0:00:00.455) 0:00:13.817 ***** 2025-11-08 13:56:21.431075 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:56:21.431079 | orchestrator | 2025-11-08 13:56:21.431083 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-11-08 13:56:21.431087 | orchestrator | Saturday 08 November 2025 13:54:21 +0000 (0:00:00.138) 0:00:13.955 ***** 2025-11-08 13:56:21.431090 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:56:21.431094 | orchestrator | 2025-11-08 13:56:21.431098 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-11-08 13:56:21.431102 | orchestrator | Saturday 08 November 2025 13:54:22 +0000 (0:00:00.236) 0:00:14.191 ***** 2025-11-08 13:56:21.431106 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:56:21.431109 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:56:21.431113 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:56:21.431117 | orchestrator | 2025-11-08 13:56:21.431120 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-11-08 13:56:21.431124 | orchestrator | Saturday 08 November 2025 13:54:22 +0000 (0:00:00.293) 0:00:14.484 ***** 2025-11-08 13:56:21.431163 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:56:21.431167 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:56:21.431171 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:56:21.431175 | orchestrator | 2025-11-08 13:56:21.431179 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-11-08 13:56:21.431182 | orchestrator | Saturday 08 November 2025 13:54:22 +0000 (0:00:00.365) 0:00:14.850 ***** 2025-11-08 13:56:21.431186 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:56:21.431190 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:56:21.431194 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:56:21.431197 | orchestrator | 2025-11-08 13:56:21.431201 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-11-08 13:56:21.431205 | orchestrator | Saturday 08 November 2025 13:54:23 +0000 (0:00:00.496) 0:00:15.347 ***** 2025-11-08 13:56:21.431239 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:56:21.431243 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:56:21.431247 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:56:21.431251 | orchestrator | 2025-11-08 13:56:21.431255 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-11-08 13:56:21.431259 | orchestrator | Saturday 08 November 2025 13:54:23 +0000 (0:00:00.319) 0:00:15.666 ***** 2025-11-08 13:56:21.431263 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:56:21.431266 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:56:21.431270 | orchestrator | 
skipping: [testbed-node-5] 2025-11-08 13:56:21.431274 | orchestrator | 2025-11-08 13:56:21.431278 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-11-08 13:56:21.431282 | orchestrator | Saturday 08 November 2025 13:54:23 +0000 (0:00:00.308) 0:00:15.975 ***** 2025-11-08 13:56:21.431285 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:56:21.431289 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:56:21.431293 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:56:21.431311 | orchestrator | 2025-11-08 13:56:21.431315 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-11-08 13:56:21.431319 | orchestrator | Saturday 08 November 2025 13:54:24 +0000 (0:00:00.309) 0:00:16.285 ***** 2025-11-08 13:56:21.431323 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:56:21.431327 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:56:21.431331 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:56:21.431334 | orchestrator | 2025-11-08 13:56:21.431343 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-11-08 13:56:21.431346 | orchestrator | Saturday 08 November 2025 13:54:24 +0000 (0:00:00.468) 0:00:16.754 ***** 2025-11-08 13:56:21.431355 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--cd56445f--4803--5564--bbe6--d923870c576d-osd--block--cd56445f--4803--5564--bbe6--d923870c576d', 'dm-uuid-LVM-2aoSJq8qcletrfZW5Bfk49sieQy7Dha46abW5FdczHWOfObGe9YHIftk1ztHuXyc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-08 13:56:21.431361 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c507e483--80d4--5110--a9ba--f918053b344b-osd--block--c507e483--80d4--5110--a9ba--f918053b344b', 'dm-uuid-LVM-IDxte1UGWzz3W0bynQvI1szgeLVOvCPVPk5ndyhCZvAs7TGhtenMfsBDN2xArfET'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-08 13:56:21.431366 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:56:21.431371 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:56:21.431375 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:56:21.431379 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:56:21.431383 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:56:21.431398 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:56:21.431415 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:56:21.431422 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:56:21.431428 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7bcb89ad-f0c3-4ca7-8180-786cf7e929b8', 'scsi-SQEMU_QEMU_HARDDISK_7bcb89ad-f0c3-4ca7-8180-786cf7e929b8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7bcb89ad-f0c3-4ca7-8180-786cf7e929b8-part1', 'scsi-SQEMU_QEMU_HARDDISK_7bcb89ad-f0c3-4ca7-8180-786cf7e929b8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7bcb89ad-f0c3-4ca7-8180-786cf7e929b8-part14', 'scsi-SQEMU_QEMU_HARDDISK_7bcb89ad-f0c3-4ca7-8180-786cf7e929b8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7bcb89ad-f0c3-4ca7-8180-786cf7e929b8-part15', 'scsi-SQEMU_QEMU_HARDDISK_7bcb89ad-f0c3-4ca7-8180-786cf7e929b8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7bcb89ad-f0c3-4ca7-8180-786cf7e929b8-part16', 'scsi-SQEMU_QEMU_HARDDISK_7bcb89ad-f0c3-4ca7-8180-786cf7e929b8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-08 13:56:21.431434 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--cd56445f--4803--5564--bbe6--d923870c576d-osd--block--cd56445f--4803--5564--bbe6--d923870c576d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wP7Y30-oaef-Tz3m-ymot-UtJb-W1oc-7fXS08', 'scsi-0QEMU_QEMU_HARDDISK_ce3e3473-55e8-454e-8a0a-ac291b184d20', 'scsi-SQEMU_QEMU_HARDDISK_ce3e3473-55e8-454e-8a0a-ac291b184d20'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-08 13:56:21.431478 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--c507e483--80d4--5110--a9ba--f918053b344b-osd--block--c507e483--80d4--5110--a9ba--f918053b344b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Bc3e8j-yfsK-VMtb-Fnua-tbMC-u3Qa-X0FxLG', 'scsi-0QEMU_QEMU_HARDDISK_3757d830-b0af-49e2-85a4-9877085f3a2f', 'scsi-SQEMU_QEMU_HARDDISK_3757d830-b0af-49e2-85a4-9877085f3a2f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-08 13:56:21.431492 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e000e6ad-d7f7-4db6-bbc8-734d25f4dc3b', 'scsi-SQEMU_QEMU_HARDDISK_e000e6ad-d7f7-4db6-bbc8-734d25f4dc3b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-08 13:56:21.431497 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f393addc--5b9a--54bf--a4a6--7d44f9449202-osd--block--f393addc--5b9a--54bf--a4a6--7d44f9449202', 'dm-uuid-LVM-Psb3AsaEyaKNzCJXJeLeWO3LpbpcdM9ixBKgDMtINeobpsh63SGANUXOPb5q1Qgm'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-08 13:56:21.431502 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-08-12-59-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-08 13:56:21.431506 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--380ddcdc--ed2e--5f5e--8a3f--001787d903df-osd--block--380ddcdc--ed2e--5f5e--8a3f--001787d903df', 'dm-uuid-LVM-XkmoqgmD3aUEVWZlaD5Lzze1pvY5tczAOFqk6zqgtCSDy8gwaoid5OsY42dcffXC'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-08 13:56:21.431509 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
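The "Collect existed devices" items in this play are Ansible's per-device hardware facts (holders, partitions, size, rotational, and so on), which ceph-facts walks to see which disks are already claimed by LVM/Ceph volumes. A rough way to read the same holder and partition information straight from sysfs on one of the nodes, assuming only the standard Linux /sys/block layout (this is not how Ansible gathers its facts), is the following sketch:

import os

SYS_BLOCK = "/sys/block"

for dev in sorted(os.listdir(SYS_BLOCK)):
    base = os.path.join(SYS_BLOCK, dev)
    # 'holders' lists devices stacked on top of this one, e.g. the
    # ceph--...--osd--block LVM volumes seen in the facts above.
    holders = os.listdir(os.path.join(base, "holders"))
    # Partition directories are named after the device (sda -> sda1, sda14, ...).
    partitions = [p for p in os.listdir(base)
                  if p.startswith(dev) and os.path.isdir(os.path.join(base, p))]
    # sysfs reports size in 512-byte sectors regardless of the logical sector size.
    with open(os.path.join(base, "size")) as fh:
        size_gb = int(fh.read().strip()) * 512 / 10**9
    print(f"{dev}: {size_gb:.2f} GB, holders={holders or 'none'}, "
          f"partitions={partitions or 'none'}")

In terms of the facts shown in this loop, a device that reports holders corresponds to the sdb/sdc entries that already back an OSD volume, while one with no holders and no partitions (like sdd) is the kind of disk a later OSD deployment could still claim.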
2025-11-08 13:56:21.431513 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:56:21.431533 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:56:21.431538 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:56:21.431542 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:56:21.431548 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:56:21.431552 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:56:21.431556 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:56:21.431560 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:56:21.431568 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1fc42fd-5332-49a1-9701-fd67e0fd5d8d', 'scsi-SQEMU_QEMU_HARDDISK_d1fc42fd-5332-49a1-9701-fd67e0fd5d8d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1fc42fd-5332-49a1-9701-fd67e0fd5d8d-part1', 'scsi-SQEMU_QEMU_HARDDISK_d1fc42fd-5332-49a1-9701-fd67e0fd5d8d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1fc42fd-5332-49a1-9701-fd67e0fd5d8d-part14', 'scsi-SQEMU_QEMU_HARDDISK_d1fc42fd-5332-49a1-9701-fd67e0fd5d8d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1fc42fd-5332-49a1-9701-fd67e0fd5d8d-part15', 'scsi-SQEMU_QEMU_HARDDISK_d1fc42fd-5332-49a1-9701-fd67e0fd5d8d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1fc42fd-5332-49a1-9701-fd67e0fd5d8d-part16', 'scsi-SQEMU_QEMU_HARDDISK_d1fc42fd-5332-49a1-9701-fd67e0fd5d8d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-08 13:56:21.431578 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f393addc--5b9a--54bf--a4a6--7d44f9449202-osd--block--f393addc--5b9a--54bf--a4a6--7d44f9449202'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dFekCi-fde7-ud2U-Fmt7-Fp42-q7ek-vCoFvX', 'scsi-0QEMU_QEMU_HARDDISK_92c2e246-dc93-49f1-98da-a6574bccf4cb', 'scsi-SQEMU_QEMU_HARDDISK_92c2e246-dc93-49f1-98da-a6574bccf4cb'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-08 13:56:21.431582 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--380ddcdc--ed2e--5f5e--8a3f--001787d903df-osd--block--380ddcdc--ed2e--5f5e--8a3f--001787d903df'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iYWukL-r4Eh-juxx-rEgA-KLFr-hV2P-fzIfJo', 'scsi-0QEMU_QEMU_HARDDISK_dc29408d-4f3e-478d-82da-c226aaca029c', 'scsi-SQEMU_QEMU_HARDDISK_dc29408d-4f3e-478d-82da-c226aaca029c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-08 13:56:21.431586 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a45a4cf7-d855-4857-b9ae-b573b3c7176d', 'scsi-SQEMU_QEMU_HARDDISK_a45a4cf7-d855-4857-b9ae-b573b3c7176d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-08 13:56:21.431591 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-08-13-00-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-08 13:56:21.431595 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--56ba2a68--c761--5674--9bd2--a2481e6b0580-osd--block--56ba2a68--c761--5674--9bd2--a2481e6b0580', 'dm-uuid-LVM-a02JLNVcMB1MMongJvoDhkkHadmwNkJLJ7TOO1SYtEG3RwKJnq6tfFrJWMWuJyDz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-08 13:56:21.431602 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:56:21.431611 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b5af892c--b8e4--5298--acf4--1670635abe97-osd--block--b5af892c--b8e4--5298--acf4--1670635abe97', 'dm-uuid-LVM-CMLB1kfMUkDAmKaUYr9nLL1AtWJTZsRIFc3JrLIKvs6ht3G9mvyk6WvOaWdhdWof'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-08 13:56:21.431616 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:56:21.431622 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:56:21.431626 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:56:21.431630 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:56:21.431634 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:56:21.431637 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:56:21.431641 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:56:21.431648 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-08 13:56:21.431659 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165', 'scsi-SQEMU_QEMU_HARDDISK_1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165-part1', 'scsi-SQEMU_QEMU_HARDDISK_1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165-part14', 'scsi-SQEMU_QEMU_HARDDISK_1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165-part15', 'scsi-SQEMU_QEMU_HARDDISK_1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165-part16', 'scsi-SQEMU_QEMU_HARDDISK_1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-08 13:56:21.431663 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--56ba2a68--c761--5674--9bd2--a2481e6b0580-osd--block--56ba2a68--c761--5674--9bd2--a2481e6b0580'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tLjwv0-Oeut-hwgd-noei-DeUf-v6Mm-dsBb3I', 'scsi-0QEMU_QEMU_HARDDISK_4485c49e-1f3e-4177-b8cf-e377966726ff', 'scsi-SQEMU_QEMU_HARDDISK_4485c49e-1f3e-4177-b8cf-e377966726ff'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-08 13:56:21.431668 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b5af892c--b8e4--5298--acf4--1670635abe97-osd--block--b5af892c--b8e4--5298--acf4--1670635abe97'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Vi1Q5v-sZk0-8B4D-Vvxf-s8oz-czzq-liaWuw', 'scsi-0QEMU_QEMU_HARDDISK_f84a4500-4dd6-44ad-a9ff-274f9f36fc36', 'scsi-SQEMU_QEMU_HARDDISK_f84a4500-4dd6-44ad-a9ff-274f9f36fc36'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-08 13:56:21.431675 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c4ff64d0-4838-4e36-9da9-d01e7c6d3995', 'scsi-SQEMU_QEMU_HARDDISK_c4ff64d0-4838-4e36-9da9-d01e7c6d3995'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-08 13:56:21.431682 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-08-13-00-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-08 13:56:21.431687 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:56:21.431690 | orchestrator | 2025-11-08 13:56:21.431694 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-11-08 13:56:21.431698 | orchestrator | Saturday 08 November 2025 13:54:25 +0000 (0:00:00.525) 0:00:17.279 ***** 2025-11-08 13:56:21.431705 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--cd56445f--4803--5564--bbe6--d923870c576d-osd--block--cd56445f--4803--5564--bbe6--d923870c576d', 'dm-uuid-LVM-2aoSJq8qcletrfZW5Bfk49sieQy7Dha46abW5FdczHWOfObGe9YHIftk1ztHuXyc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.431710 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c507e483--80d4--5110--a9ba--f918053b344b-osd--block--c507e483--80d4--5110--a9ba--f918053b344b', 'dm-uuid-LVM-IDxte1UGWzz3W0bynQvI1szgeLVOvCPVPk5ndyhCZvAs7TGhtenMfsBDN2xArfET'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.431714 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.431718 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.431725 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.431734 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.431742 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.431746 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.431750 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f393addc--5b9a--54bf--a4a6--7d44f9449202-osd--block--f393addc--5b9a--54bf--a4a6--7d44f9449202', 'dm-uuid-LVM-Psb3AsaEyaKNzCJXJeLeWO3LpbpcdM9ixBKgDMtINeobpsh63SGANUXOPb5q1Qgm'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.431754 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.431761 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--380ddcdc--ed2e--5f5e--8a3f--001787d903df-osd--block--380ddcdc--ed2e--5f5e--8a3f--001787d903df', 'dm-uuid-LVM-XkmoqgmD3aUEVWZlaD5Lzze1pvY5tczAOFqk6zqgtCSDy8gwaoid5OsY42dcffXC'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.431767 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.431774 | 
orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.431778 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7bcb89ad-f0c3-4ca7-8180-786cf7e929b8', 'scsi-SQEMU_QEMU_HARDDISK_7bcb89ad-f0c3-4ca7-8180-786cf7e929b8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7bcb89ad-f0c3-4ca7-8180-786cf7e929b8-part1', 'scsi-SQEMU_QEMU_HARDDISK_7bcb89ad-f0c3-4ca7-8180-786cf7e929b8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7bcb89ad-f0c3-4ca7-8180-786cf7e929b8-part14', 'scsi-SQEMU_QEMU_HARDDISK_7bcb89ad-f0c3-4ca7-8180-786cf7e929b8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7bcb89ad-f0c3-4ca7-8180-786cf7e929b8-part15', 'scsi-SQEMU_QEMU_HARDDISK_7bcb89ad-f0c3-4ca7-8180-786cf7e929b8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7bcb89ad-f0c3-4ca7-8180-786cf7e929b8-part16', 'scsi-SQEMU_QEMU_HARDDISK_7bcb89ad-f0c3-4ca7-8180-786cf7e929b8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.431786 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2025-11-08 13:56:21.431796 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--cd56445f--4803--5564--bbe6--d923870c576d-osd--block--cd56445f--4803--5564--bbe6--d923870c576d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wP7Y30-oaef-Tz3m-ymot-UtJb-W1oc-7fXS08', 'scsi-0QEMU_QEMU_HARDDISK_ce3e3473-55e8-454e-8a0a-ac291b184d20', 'scsi-SQEMU_QEMU_HARDDISK_ce3e3473-55e8-454e-8a0a-ac291b184d20'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.431801 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.431805 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--c507e483--80d4--5110--a9ba--f918053b344b-osd--block--c507e483--80d4--5110--a9ba--f918053b344b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Bc3e8j-yfsK-VMtb-Fnua-tbMC-u3Qa-X0FxLG', 'scsi-0QEMU_QEMU_HARDDISK_3757d830-b0af-49e2-85a4-9877085f3a2f', 'scsi-SQEMU_QEMU_HARDDISK_3757d830-b0af-49e2-85a4-9877085f3a2f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.431809 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.431816 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e000e6ad-d7f7-4db6-bbc8-734d25f4dc3b', 'scsi-SQEMU_QEMU_HARDDISK_e000e6ad-d7f7-4db6-bbc8-734d25f4dc3b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.431822 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.431829 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-08-12-59-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.431833 | orchestrator | skipping: [testbed-node-4] 
=> (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.431836 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.431843 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:56:21.431847 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.431858 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1fc42fd-5332-49a1-9701-fd67e0fd5d8d', 'scsi-SQEMU_QEMU_HARDDISK_d1fc42fd-5332-49a1-9701-fd67e0fd5d8d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1fc42fd-5332-49a1-9701-fd67e0fd5d8d-part1', 'scsi-SQEMU_QEMU_HARDDISK_d1fc42fd-5332-49a1-9701-fd67e0fd5d8d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1fc42fd-5332-49a1-9701-fd67e0fd5d8d-part14', 'scsi-SQEMU_QEMU_HARDDISK_d1fc42fd-5332-49a1-9701-fd67e0fd5d8d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1fc42fd-5332-49a1-9701-fd67e0fd5d8d-part15', 'scsi-SQEMU_QEMU_HARDDISK_d1fc42fd-5332-49a1-9701-fd67e0fd5d8d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1fc42fd-5332-49a1-9701-fd67e0fd5d8d-part16', 'scsi-SQEMU_QEMU_HARDDISK_d1fc42fd-5332-49a1-9701-fd67e0fd5d8d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.431862 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--f393addc--5b9a--54bf--a4a6--7d44f9449202-osd--block--f393addc--5b9a--54bf--a4a6--7d44f9449202'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dFekCi-fde7-ud2U-Fmt7-Fp42-q7ek-vCoFvX', 'scsi-0QEMU_QEMU_HARDDISK_92c2e246-dc93-49f1-98da-a6574bccf4cb', 'scsi-SQEMU_QEMU_HARDDISK_92c2e246-dc93-49f1-98da-a6574bccf4cb'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.431867 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--380ddcdc--ed2e--5f5e--8a3f--001787d903df-osd--block--380ddcdc--ed2e--5f5e--8a3f--001787d903df'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iYWukL-r4Eh-juxx-rEgA-KLFr-hV2P-fzIfJo', 'scsi-0QEMU_QEMU_HARDDISK_dc29408d-4f3e-478d-82da-c226aaca029c', 'scsi-SQEMU_QEMU_HARDDISK_dc29408d-4f3e-478d-82da-c226aaca029c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.431874 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a45a4cf7-d855-4857-b9ae-b573b3c7176d', 'scsi-SQEMU_QEMU_HARDDISK_a45a4cf7-d855-4857-b9ae-b573b3c7176d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.431882 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--56ba2a68--c761--5674--9bd2--a2481e6b0580-osd--block--56ba2a68--c761--5674--9bd2--a2481e6b0580', 'dm-uuid-LVM-a02JLNVcMB1MMongJvoDhkkHadmwNkJLJ7TOO1SYtEG3RwKJnq6tfFrJWMWuJyDz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.431888 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-08-13-00-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.431892 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b5af892c--b8e4--5298--acf4--1670635abe97-osd--block--b5af892c--b8e4--5298--acf4--1670635abe97', 'dm-uuid-LVM-CMLB1kfMUkDAmKaUYr9nLL1AtWJTZsRIFc3JrLIKvs6ht3G9mvyk6WvOaWdhdWof'], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.431899 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:56:21.431903 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.431907 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.431911 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.431919 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.431925 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.431929 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.431933 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.431939 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.431949 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165', 'scsi-SQEMU_QEMU_HARDDISK_1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165-part1', 'scsi-SQEMU_QEMU_HARDDISK_1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165-part14', 'scsi-SQEMU_QEMU_HARDDISK_1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165-part15', 'scsi-SQEMU_QEMU_HARDDISK_1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165-part16', 'scsi-SQEMU_QEMU_HARDDISK_1ae6c1c2-f8e2-4ff0-a47b-1e2cecc81165-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.431954 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--56ba2a68--c761--5674--9bd2--a2481e6b0580-osd--block--56ba2a68--c761--5674--9bd2--a2481e6b0580'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tLjwv0-Oeut-hwgd-noei-DeUf-v6Mm-dsBb3I', 'scsi-0QEMU_QEMU_HARDDISK_4485c49e-1f3e-4177-b8cf-e377966726ff', 'scsi-SQEMU_QEMU_HARDDISK_4485c49e-1f3e-4177-b8cf-e377966726ff'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.431963 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b5af892c--b8e4--5298--acf4--1670635abe97-osd--block--b5af892c--b8e4--5298--acf4--1670635abe97'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Vi1Q5v-sZk0-8B4D-Vvxf-s8oz-czzq-liaWuw', 'scsi-0QEMU_QEMU_HARDDISK_f84a4500-4dd6-44ad-a9ff-274f9f36fc36', 'scsi-SQEMU_QEMU_HARDDISK_f84a4500-4dd6-44ad-a9ff-274f9f36fc36'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.431967 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c4ff64d0-4838-4e36-9da9-d01e7c6d3995', 'scsi-SQEMU_QEMU_HARDDISK_c4ff64d0-4838-4e36-9da9-d01e7c6d3995'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.431973 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-08-13-00-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-08 13:56:21.431977 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:56:21.431981 | orchestrator | 2025-11-08 13:56:21.431985 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-11-08 13:56:21.431988 | orchestrator | Saturday 08 November 2025 13:54:25 +0000 (0:00:00.534) 0:00:17.813 ***** 2025-11-08 13:56:21.431992 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:56:21.431997 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:56:21.432000 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:56:21.432004 | orchestrator | 2025-11-08 13:56:21.432008 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-11-08 13:56:21.432012 | orchestrator | Saturday 08 November 2025 13:54:26 +0000 (0:00:00.644) 0:00:18.458 ***** 2025-11-08 13:56:21.432015 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:56:21.432019 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:56:21.432023 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:56:21.432027 | orchestrator | 2025-11-08 13:56:21.432030 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-11-08 13:56:21.432037 | orchestrator | Saturday 08 November 2025 13:54:26 +0000 (0:00:00.398) 0:00:18.856 ***** 2025-11-08 13:56:21.432041 | 
orchestrator | ok: [testbed-node-3] 2025-11-08 13:56:21.432045 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:56:21.432048 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:56:21.432052 | orchestrator | 2025-11-08 13:56:21.432056 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-11-08 13:56:21.432060 | orchestrator | Saturday 08 November 2025 13:54:27 +0000 (0:00:00.604) 0:00:19.460 ***** 2025-11-08 13:56:21.432063 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:56:21.432067 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:56:21.432071 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:56:21.432075 | orchestrator | 2025-11-08 13:56:21.432078 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-11-08 13:56:21.432082 | orchestrator | Saturday 08 November 2025 13:54:27 +0000 (0:00:00.253) 0:00:19.714 ***** 2025-11-08 13:56:21.432086 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:56:21.432089 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:56:21.432093 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:56:21.432097 | orchestrator | 2025-11-08 13:56:21.432101 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-11-08 13:56:21.432104 | orchestrator | Saturday 08 November 2025 13:54:27 +0000 (0:00:00.353) 0:00:20.067 ***** 2025-11-08 13:56:21.432108 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:56:21.432112 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:56:21.432116 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:56:21.432119 | orchestrator | 2025-11-08 13:56:21.432123 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-11-08 13:56:21.432147 | orchestrator | Saturday 08 November 2025 13:54:28 +0000 (0:00:00.396) 0:00:20.464 ***** 2025-11-08 13:56:21.432151 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-11-08 13:56:21.432156 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-11-08 13:56:21.432160 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-11-08 13:56:21.432164 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-11-08 13:56:21.432169 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-11-08 13:56:21.432173 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-11-08 13:56:21.432177 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-11-08 13:56:21.432181 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-11-08 13:56:21.432185 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-11-08 13:56:21.432190 | orchestrator | 2025-11-08 13:56:21.432194 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-11-08 13:56:21.432198 | orchestrator | Saturday 08 November 2025 13:54:29 +0000 (0:00:00.751) 0:00:21.215 ***** 2025-11-08 13:56:21.432202 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-11-08 13:56:21.432207 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-11-08 13:56:21.432233 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-11-08 13:56:21.432237 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:56:21.432242 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-11-08 13:56:21.432246 | orchestrator | 
skipping: [testbed-node-4] => (item=testbed-node-1)  2025-11-08 13:56:21.432250 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-11-08 13:56:21.432254 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:56:21.432258 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-11-08 13:56:21.432262 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-11-08 13:56:21.432267 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-11-08 13:56:21.432271 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:56:21.432275 | orchestrator | 2025-11-08 13:56:21.432280 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-11-08 13:56:21.432287 | orchestrator | Saturday 08 November 2025 13:54:29 +0000 (0:00:00.324) 0:00:21.540 ***** 2025-11-08 13:56:21.432292 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 13:56:21.432296 | orchestrator | 2025-11-08 13:56:21.432300 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-11-08 13:56:21.432306 | orchestrator | Saturday 08 November 2025 13:54:29 +0000 (0:00:00.613) 0:00:22.154 ***** 2025-11-08 13:56:21.432313 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:56:21.432317 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:56:21.432321 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:56:21.432325 | orchestrator | 2025-11-08 13:56:21.432330 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-11-08 13:56:21.432334 | orchestrator | Saturday 08 November 2025 13:54:30 +0000 (0:00:00.299) 0:00:22.453 ***** 2025-11-08 13:56:21.432338 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:56:21.432342 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:56:21.432346 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:56:21.432351 | orchestrator | 2025-11-08 13:56:21.432355 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-11-08 13:56:21.432359 | orchestrator | Saturday 08 November 2025 13:54:30 +0000 (0:00:00.266) 0:00:22.720 ***** 2025-11-08 13:56:21.432363 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:56:21.432368 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:56:21.432374 | orchestrator | skipping: [testbed-node-5] 2025-11-08 13:56:21.432378 | orchestrator | 2025-11-08 13:56:21.432383 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-11-08 13:56:21.432387 | orchestrator | Saturday 08 November 2025 13:54:30 +0000 (0:00:00.271) 0:00:22.991 ***** 2025-11-08 13:56:21.432391 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:56:21.432396 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:56:21.432400 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:56:21.432404 | orchestrator | 2025-11-08 13:56:21.432408 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-11-08 13:56:21.432412 | orchestrator | Saturday 08 November 2025 13:54:31 +0000 (0:00:00.748) 0:00:23.739 ***** 2025-11-08 13:56:21.432417 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-08 13:56:21.432421 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-08 
13:56:21.432425 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-08 13:56:21.432429 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:56:21.432433 | orchestrator | 2025-11-08 13:56:21.432437 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-11-08 13:56:21.432442 | orchestrator | Saturday 08 November 2025 13:54:31 +0000 (0:00:00.353) 0:00:24.092 ***** 2025-11-08 13:56:21.432446 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-08 13:56:21.432450 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-08 13:56:21.432454 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-08 13:56:21.432459 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:56:21.432463 | orchestrator | 2025-11-08 13:56:21.432467 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-11-08 13:56:21.432472 | orchestrator | Saturday 08 November 2025 13:54:32 +0000 (0:00:00.344) 0:00:24.437 ***** 2025-11-08 13:56:21.432476 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-08 13:56:21.432481 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-08 13:56:21.432485 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-08 13:56:21.432489 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:56:21.432494 | orchestrator | 2025-11-08 13:56:21.432498 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-11-08 13:56:21.432501 | orchestrator | Saturday 08 November 2025 13:54:32 +0000 (0:00:00.338) 0:00:24.776 ***** 2025-11-08 13:56:21.432508 | orchestrator | ok: [testbed-node-3] 2025-11-08 13:56:21.432512 | orchestrator | ok: [testbed-node-4] 2025-11-08 13:56:21.432516 | orchestrator | ok: [testbed-node-5] 2025-11-08 13:56:21.432519 | orchestrator | 2025-11-08 13:56:21.432523 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-11-08 13:56:21.432527 | orchestrator | Saturday 08 November 2025 13:54:32 +0000 (0:00:00.289) 0:00:25.065 ***** 2025-11-08 13:56:21.432530 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-11-08 13:56:21.432534 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-11-08 13:56:21.432538 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-11-08 13:56:21.432542 | orchestrator | 2025-11-08 13:56:21.432545 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-11-08 13:56:21.432549 | orchestrator | Saturday 08 November 2025 13:54:33 +0000 (0:00:00.424) 0:00:25.490 ***** 2025-11-08 13:56:21.432553 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-11-08 13:56:21.432557 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-11-08 13:56:21.432560 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-11-08 13:56:21.432564 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-11-08 13:56:21.432568 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-11-08 13:56:21.432572 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-11-08 13:56:21.432575 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => 
(item=testbed-manager) 2025-11-08 13:56:21.432579 | orchestrator | 2025-11-08 13:56:21.432583 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-11-08 13:56:21.432586 | orchestrator | Saturday 08 November 2025 13:54:34 +0000 (0:00:00.929) 0:00:26.420 ***** 2025-11-08 13:56:21.432590 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-11-08 13:56:21.432594 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-11-08 13:56:21.432598 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-11-08 13:56:21.432601 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-11-08 13:56:21.432605 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-11-08 13:56:21.432609 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-11-08 13:56:21.432614 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-11-08 13:56:21.432618 | orchestrator | 2025-11-08 13:56:21.432622 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-11-08 13:56:21.432626 | orchestrator | Saturday 08 November 2025 13:54:36 +0000 (0:00:01.989) 0:00:28.409 ***** 2025-11-08 13:56:21.432629 | orchestrator | skipping: [testbed-node-3] 2025-11-08 13:56:21.432633 | orchestrator | skipping: [testbed-node-4] 2025-11-08 13:56:21.432637 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-11-08 13:56:21.432641 | orchestrator | 2025-11-08 13:56:21.432644 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-11-08 13:56:21.432648 | orchestrator | Saturday 08 November 2025 13:54:36 +0000 (0:00:00.379) 0:00:28.789 ***** 2025-11-08 13:56:21.432654 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-11-08 13:56:21.432658 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-11-08 13:56:21.432665 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-11-08 13:56:21.432669 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-11-08 13:56:21.432673 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 
'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-11-08 13:56:21.432677 | orchestrator | 2025-11-08 13:56:21.432680 | orchestrator | TASK [generate keys] *********************************************************** 2025-11-08 13:56:21.432684 | orchestrator | Saturday 08 November 2025 13:55:25 +0000 (0:00:48.437) 0:01:17.226 ***** 2025-11-08 13:56:21.432688 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-08 13:56:21.432692 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-08 13:56:21.432695 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-08 13:56:21.432699 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-08 13:56:21.432703 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-08 13:56:21.432706 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-08 13:56:21.432710 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-11-08 13:56:21.432714 | orchestrator | 2025-11-08 13:56:21.432717 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-11-08 13:56:21.432721 | orchestrator | Saturday 08 November 2025 13:55:50 +0000 (0:00:24.950) 0:01:42.177 ***** 2025-11-08 13:56:21.432725 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-08 13:56:21.432729 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-08 13:56:21.432732 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-08 13:56:21.432736 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-08 13:56:21.432740 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-08 13:56:21.432743 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-08 13:56:21.432747 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-11-08 13:56:21.432751 | orchestrator | 2025-11-08 13:56:21.432755 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-11-08 13:56:21.432758 | orchestrator | Saturday 08 November 2025 13:56:02 +0000 (0:00:12.192) 0:01:54.369 ***** 2025-11-08 13:56:21.432762 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-08 13:56:21.432766 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-11-08 13:56:21.432770 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-11-08 13:56:21.432773 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-08 13:56:21.432777 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-11-08 13:56:21.432783 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-11-08 13:56:21.432787 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-08 13:56:21.432794 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-11-08 13:56:21.432798 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-2(192.168.16.12)] => (item=None) 2025-11-08 13:56:21.432802 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-08 13:56:21.432806 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-11-08 13:56:21.432809 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-11-08 13:56:21.432815 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-08 13:56:21.432819 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-11-08 13:56:21.432823 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-11-08 13:56:21.432827 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-08 13:56:21.432830 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-11-08 13:56:21.432834 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-11-08 13:56:21.432838 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-11-08 13:56:21.432842 | orchestrator | 2025-11-08 13:56:21.432845 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 13:56:21.432849 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-11-08 13:56:21.432854 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-11-08 13:56:21.432858 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-11-08 13:56:21.432862 | orchestrator | 2025-11-08 13:56:21.432866 | orchestrator | 2025-11-08 13:56:21.432869 | orchestrator | 2025-11-08 13:56:21.432873 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 13:56:21.432877 | orchestrator | Saturday 08 November 2025 13:56:19 +0000 (0:00:17.634) 0:02:12.004 ***** 2025-11-08 13:56:21.432880 | orchestrator | =============================================================================== 2025-11-08 13:56:21.432884 | orchestrator | create openstack pool(s) ----------------------------------------------- 48.44s 2025-11-08 13:56:21.432888 | orchestrator | generate keys ---------------------------------------------------------- 24.95s 2025-11-08 13:56:21.432891 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.64s 2025-11-08 13:56:21.432895 | orchestrator | get keys from monitors ------------------------------------------------- 12.19s 2025-11-08 13:56:21.432899 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.20s 2025-11-08 13:56:21.432903 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.99s 2025-11-08 13:56:21.432906 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.79s 2025-11-08 13:56:21.432910 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.95s 2025-11-08 13:56:21.432913 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.93s 2025-11-08 13:56:21.432917 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.79s 2025-11-08 13:56:21.432921 | orchestrator | 
ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.75s 2025-11-08 13:56:21.432925 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.75s 2025-11-08 13:56:21.432928 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.68s 2025-11-08 13:56:21.432932 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.65s 2025-11-08 13:56:21.432936 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.64s 2025-11-08 13:56:21.432942 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.64s 2025-11-08 13:56:21.432946 | orchestrator | ceph-facts : Set_fact discovered_interpreter_python if not previously set --- 0.63s 2025-11-08 13:56:21.432950 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.61s 2025-11-08 13:56:21.432954 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.61s 2025-11-08 13:56:21.432957 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.60s 2025-11-08 13:56:21.432961 | orchestrator | 2025-11-08 13:56:21 | INFO  | Task 79435321-4aaf-466d-9b49-f20c922a86ba is in state STARTED 2025-11-08 13:56:21.432966 | orchestrator | 2025-11-08 13:56:21 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:56:24.475623 | orchestrator | 2025-11-08 13:56:24 | INFO  | Task c01dc5a1-f579-4745-af6f-929c7a7cacd7 is in state STARTED 2025-11-08 13:56:24.476619 | orchestrator | 2025-11-08 13:56:24 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:56:24.477876 | orchestrator | 2025-11-08 13:56:24 | INFO  | Task 79435321-4aaf-466d-9b49-f20c922a86ba is in state STARTED 2025-11-08 13:56:24.477896 | orchestrator | 2025-11-08 13:56:24 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:56:27.524672 | orchestrator | 2025-11-08 13:56:27 | INFO  | Task c01dc5a1-f579-4745-af6f-929c7a7cacd7 is in state STARTED 2025-11-08 13:56:27.527494 | orchestrator | 2025-11-08 13:56:27 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:56:27.528001 | orchestrator | 2025-11-08 13:56:27 | INFO  | Task 79435321-4aaf-466d-9b49-f20c922a86ba is in state STARTED 2025-11-08 13:56:27.528044 | orchestrator | 2025-11-08 13:56:27 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:56:30.569526 | orchestrator | 2025-11-08 13:56:30 | INFO  | Task c01dc5a1-f579-4745-af6f-929c7a7cacd7 is in state STARTED 2025-11-08 13:56:30.571696 | orchestrator | 2025-11-08 13:56:30 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:56:30.575619 | orchestrator | 2025-11-08 13:56:30 | INFO  | Task 79435321-4aaf-466d-9b49-f20c922a86ba is in state STARTED 2025-11-08 13:56:30.575693 | orchestrator | 2025-11-08 13:56:30 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:56:33.617313 | orchestrator | 2025-11-08 13:56:33 | INFO  | Task c01dc5a1-f579-4745-af6f-929c7a7cacd7 is in state STARTED 2025-11-08 13:56:33.618390 | orchestrator | 2025-11-08 13:56:33 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:56:33.619979 | orchestrator | 2025-11-08 13:56:33 | INFO  | Task 79435321-4aaf-466d-9b49-f20c922a86ba is in state STARTED 2025-11-08 13:56:33.620060 | orchestrator | 2025-11-08 13:56:33 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:56:36.666715 
| orchestrator | 2025-11-08 13:56:36 | INFO  | Task c01dc5a1-f579-4745-af6f-929c7a7cacd7 is in state STARTED 2025-11-08 13:56:36.668988 | orchestrator | 2025-11-08 13:56:36 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:56:36.670938 | orchestrator | 2025-11-08 13:56:36 | INFO  | Task 79435321-4aaf-466d-9b49-f20c922a86ba is in state STARTED 2025-11-08 13:56:36.671081 | orchestrator | 2025-11-08 13:56:36 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:56:39.710352 | orchestrator | 2025-11-08 13:56:39 | INFO  | Task c01dc5a1-f579-4745-af6f-929c7a7cacd7 is in state STARTED 2025-11-08 13:56:39.710441 | orchestrator | 2025-11-08 13:56:39 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:56:39.711338 | orchestrator | 2025-11-08 13:56:39 | INFO  | Task 79435321-4aaf-466d-9b49-f20c922a86ba is in state STARTED 2025-11-08 13:56:39.711393 | orchestrator | 2025-11-08 13:56:39 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:56:42.753618 | orchestrator | 2025-11-08 13:56:42 | INFO  | Task c01dc5a1-f579-4745-af6f-929c7a7cacd7 is in state STARTED 2025-11-08 13:56:42.755066 | orchestrator | 2025-11-08 13:56:42 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:56:42.758761 | orchestrator | 2025-11-08 13:56:42 | INFO  | Task 79435321-4aaf-466d-9b49-f20c922a86ba is in state STARTED 2025-11-08 13:56:42.759147 | orchestrator | 2025-11-08 13:56:42 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:56:45.799612 | orchestrator | 2025-11-08 13:56:45 | INFO  | Task c01dc5a1-f579-4745-af6f-929c7a7cacd7 is in state STARTED 2025-11-08 13:56:45.801028 | orchestrator | 2025-11-08 13:56:45 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:56:45.803029 | orchestrator | 2025-11-08 13:56:45 | INFO  | Task 79435321-4aaf-466d-9b49-f20c922a86ba is in state STARTED 2025-11-08 13:56:45.803136 | orchestrator | 2025-11-08 13:56:45 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:56:48.850213 | orchestrator | 2025-11-08 13:56:48 | INFO  | Task c01dc5a1-f579-4745-af6f-929c7a7cacd7 is in state STARTED 2025-11-08 13:56:48.852118 | orchestrator | 2025-11-08 13:56:48 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:56:48.854266 | orchestrator | 2025-11-08 13:56:48 | INFO  | Task 79435321-4aaf-466d-9b49-f20c922a86ba is in state STARTED 2025-11-08 13:56:48.854297 | orchestrator | 2025-11-08 13:56:48 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:56:51.908127 | orchestrator | 2025-11-08 13:56:51 | INFO  | Task c01dc5a1-f579-4745-af6f-929c7a7cacd7 is in state STARTED 2025-11-08 13:56:51.910272 | orchestrator | 2025-11-08 13:56:51 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:56:51.915495 | orchestrator | 2025-11-08 13:56:51 | INFO  | Task 79435321-4aaf-466d-9b49-f20c922a86ba is in state STARTED 2025-11-08 13:56:51.916223 | orchestrator | 2025-11-08 13:56:51 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:56:54.953934 | orchestrator | 2025-11-08 13:56:54 | INFO  | Task c01dc5a1-f579-4745-af6f-929c7a7cacd7 is in state STARTED 2025-11-08 13:56:54.954812 | orchestrator | 2025-11-08 13:56:54 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:56:54.955544 | orchestrator | 2025-11-08 13:56:54 | INFO  | Task 79435321-4aaf-466d-9b49-f20c922a86ba is in state STARTED 2025-11-08 13:56:54.955577 | orchestrator | 
2025-11-08 13:56:54 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:56:58.015756 | orchestrator | 2025-11-08 13:56:58 | INFO  | Task c01dc5a1-f579-4745-af6f-929c7a7cacd7 is in state SUCCESS 2025-11-08 13:56:58.018461 | orchestrator | 2025-11-08 13:56:58 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:56:58.021413 | orchestrator | 2025-11-08 13:56:58 | INFO  | Task 79435321-4aaf-466d-9b49-f20c922a86ba is in state STARTED 2025-11-08 13:56:58.021504 | orchestrator | 2025-11-08 13:56:58 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:57:01.071572 | orchestrator | 2025-11-08 13:57:01 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:57:01.073965 | orchestrator | 2025-11-08 13:57:01 | INFO  | Task 79435321-4aaf-466d-9b49-f20c922a86ba is in state SUCCESS 2025-11-08 13:57:01.075499 | orchestrator | 2025-11-08 13:57:01.075548 | orchestrator | 2025-11-08 13:57:01.075562 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-11-08 13:57:01.075603 | orchestrator | 2025-11-08 13:57:01.075618 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2025-11-08 13:57:01.075631 | orchestrator | Saturday 08 November 2025 13:56:24 +0000 (0:00:00.155) 0:00:00.155 ***** 2025-11-08 13:57:01.075644 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-11-08 13:57:01.075659 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-11-08 13:57:01.075672 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-11-08 13:57:01.075683 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-11-08 13:57:01.075970 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-11-08 13:57:01.075997 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-11-08 13:57:01.076009 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-11-08 13:57:01.076022 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-11-08 13:57:01.076035 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-11-08 13:57:01.076046 | orchestrator | 2025-11-08 13:57:01.076058 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-11-08 13:57:01.076095 | orchestrator | Saturday 08 November 2025 13:56:29 +0000 (0:00:04.739) 0:00:04.894 ***** 2025-11-08 13:57:01.076106 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-11-08 13:57:01.076118 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-11-08 13:57:01.076129 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-11-08 13:57:01.076141 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-11-08 13:57:01.076153 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.cinder.keyring) 2025-11-08 13:57:01.076165 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-11-08 13:57:01.076176 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-11-08 13:57:01.076187 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-11-08 13:57:01.076197 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-11-08 13:57:01.076208 | orchestrator | 2025-11-08 13:57:01.076220 | orchestrator | TASK [Create share directory] ************************************************** 2025-11-08 13:57:01.076231 | orchestrator | Saturday 08 November 2025 13:56:33 +0000 (0:00:04.425) 0:00:09.320 ***** 2025-11-08 13:57:01.076244 | orchestrator | changed: [testbed-manager -> localhost] 2025-11-08 13:57:01.076256 | orchestrator | 2025-11-08 13:57:01.076268 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-11-08 13:57:01.076279 | orchestrator | Saturday 08 November 2025 13:56:34 +0000 (0:00:00.946) 0:00:10.266 ***** 2025-11-08 13:57:01.076291 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-11-08 13:57:01.076302 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-11-08 13:57:01.076313 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-11-08 13:57:01.076324 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-11-08 13:57:01.076443 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-11-08 13:57:01.076459 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-11-08 13:57:01.076486 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-11-08 13:57:01.076498 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-11-08 13:57:01.076511 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-11-08 13:57:01.076522 | orchestrator | 2025-11-08 13:57:01.076548 | orchestrator | TASK [Check if target directories exist] *************************************** 2025-11-08 13:57:01.076560 | orchestrator | Saturday 08 November 2025 13:56:47 +0000 (0:00:13.081) 0:00:23.348 ***** 2025-11-08 13:57:01.076572 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2025-11-08 13:57:01.076584 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2025-11-08 13:57:01.076596 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2025-11-08 13:57:01.076607 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2025-11-08 13:57:01.076715 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2025-11-08 13:57:01.076731 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2025-11-08 13:57:01.076742 | orchestrator | ok: [testbed-manager] => 
(item=/opt/configuration/environments/kolla/files/overlays/glance) 2025-11-08 13:57:01.076753 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2025-11-08 13:57:01.076764 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2025-11-08 13:57:01.076775 | orchestrator | 2025-11-08 13:57:01.076786 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-11-08 13:57:01.076797 | orchestrator | Saturday 08 November 2025 13:56:50 +0000 (0:00:02.872) 0:00:26.220 ***** 2025-11-08 13:57:01.076810 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-11-08 13:57:01.076821 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-11-08 13:57:01.076832 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-11-08 13:57:01.076844 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-11-08 13:57:01.076854 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-11-08 13:57:01.076865 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-11-08 13:57:01.076876 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-11-08 13:57:01.076887 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-11-08 13:57:01.076898 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-11-08 13:57:01.076909 | orchestrator | 2025-11-08 13:57:01.076920 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 13:57:01.076931 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 13:57:01.076944 | orchestrator | 2025-11-08 13:57:01.076955 | orchestrator | 2025-11-08 13:57:01.076967 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 13:57:01.076978 | orchestrator | Saturday 08 November 2025 13:56:56 +0000 (0:00:06.304) 0:00:32.524 ***** 2025-11-08 13:57:01.076989 | orchestrator | =============================================================================== 2025-11-08 13:57:01.077000 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.08s 2025-11-08 13:57:01.077012 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.30s 2025-11-08 13:57:01.077023 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.74s 2025-11-08 13:57:01.077049 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.43s 2025-11-08 13:57:01.077060 | orchestrator | Check if target directories exist --------------------------------------- 2.87s 2025-11-08 13:57:01.077125 | orchestrator | Create share directory -------------------------------------------------- 0.95s 2025-11-08 13:57:01.077137 | orchestrator | 2025-11-08 13:57:01.077148 | orchestrator | 2025-11-08 13:57:01.077159 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-08 13:57:01.077171 | orchestrator | 2025-11-08 13:57:01.077183 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-08 13:57:01.077195 | orchestrator | Saturday 08 November 2025 
13:55:11 +0000 (0:00:00.247) 0:00:00.247 ***** 2025-11-08 13:57:01.077207 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:57:01.077218 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:57:01.077229 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:57:01.077240 | orchestrator | 2025-11-08 13:57:01.077251 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-08 13:57:01.077261 | orchestrator | Saturday 08 November 2025 13:55:12 +0000 (0:00:00.295) 0:00:00.543 ***** 2025-11-08 13:57:01.077272 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-11-08 13:57:01.077284 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-11-08 13:57:01.077295 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-11-08 13:57:01.077306 | orchestrator | 2025-11-08 13:57:01.077317 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-11-08 13:57:01.077327 | orchestrator | 2025-11-08 13:57:01.077338 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-11-08 13:57:01.077349 | orchestrator | Saturday 08 November 2025 13:55:12 +0000 (0:00:00.403) 0:00:00.946 ***** 2025-11-08 13:57:01.077360 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:57:01.077371 | orchestrator | 2025-11-08 13:57:01.077383 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-11-08 13:57:01.077403 | orchestrator | Saturday 08 November 2025 13:55:13 +0000 (0:00:00.473) 0:00:01.420 ***** 2025-11-08 13:57:01.077441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-11-08 13:57:01.077480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-11-08 13:57:01.077505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 
'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-11-08 13:57:01.077526 | orchestrator | 2025-11-08 13:57:01.077537 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-11-08 13:57:01.077549 | orchestrator | Saturday 08 November 2025 13:55:14 +0000 (0:00:01.179) 0:00:02.600 ***** 2025-11-08 13:57:01.077561 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:57:01.077572 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:57:01.077583 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:57:01.077595 | orchestrator | 2025-11-08 13:57:01.077607 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-11-08 13:57:01.077619 | orchestrator | Saturday 08 November 2025 13:55:14 +0000 (0:00:00.475) 0:00:03.075 ***** 2025-11-08 13:57:01.077631 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-11-08 13:57:01.077643 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-11-08 13:57:01.077655 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-11-08 13:57:01.077666 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-11-08 13:57:01.077677 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-11-08 13:57:01.077688 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-11-08 13:57:01.077700 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-11-08 13:57:01.077711 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-11-08 13:57:01.077724 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-11-08 13:57:01.077736 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-11-08 13:57:01.077748 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-11-08 13:57:01.077759 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-11-08 13:57:01.077770 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-11-08 13:57:01.077782 | orchestrator | 
skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-11-08 13:57:01.077799 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-11-08 13:57:01.077811 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-11-08 13:57:01.077822 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-11-08 13:57:01.077833 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-11-08 13:57:01.077845 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-11-08 13:57:01.077856 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-11-08 13:57:01.077868 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-11-08 13:57:01.077879 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-11-08 13:57:01.077897 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-11-08 13:57:01.077909 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-11-08 13:57:01.077929 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-11-08 13:57:01.077942 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-11-08 13:57:01.077952 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-11-08 13:57:01.077963 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-11-08 13:57:01.077974 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-11-08 13:57:01.077984 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-11-08 13:57:01.077995 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-11-08 13:57:01.078006 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-11-08 13:57:01.078171 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-11-08 13:57:01.078193 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-11-08 13:57:01.078200 | orchestrator | 2025-11-08 13:57:01.078207 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-11-08 13:57:01.078215 | orchestrator | Saturday 08 November 2025 13:55:15 +0000 (0:00:00.686) 0:00:03.761 ***** 2025-11-08 13:57:01.078221 | 
orchestrator | ok: [testbed-node-0] 2025-11-08 13:57:01.078229 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:57:01.078235 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:57:01.078242 | orchestrator | 2025-11-08 13:57:01.078249 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-11-08 13:57:01.078255 | orchestrator | Saturday 08 November 2025 13:55:15 +0000 (0:00:00.321) 0:00:04.083 ***** 2025-11-08 13:57:01.078262 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:57:01.078269 | orchestrator | 2025-11-08 13:57:01.078276 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-11-08 13:57:01.078283 | orchestrator | Saturday 08 November 2025 13:55:15 +0000 (0:00:00.137) 0:00:04.220 ***** 2025-11-08 13:57:01.078289 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:57:01.078296 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:57:01.078303 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:57:01.078309 | orchestrator | 2025-11-08 13:57:01.078316 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-11-08 13:57:01.078331 | orchestrator | Saturday 08 November 2025 13:55:16 +0000 (0:00:00.459) 0:00:04.680 ***** 2025-11-08 13:57:01.078338 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:57:01.078345 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:57:01.078351 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:57:01.078358 | orchestrator | 2025-11-08 13:57:01.078364 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-11-08 13:57:01.078371 | orchestrator | Saturday 08 November 2025 13:55:16 +0000 (0:00:00.304) 0:00:04.985 ***** 2025-11-08 13:57:01.078377 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:57:01.078384 | orchestrator | 2025-11-08 13:57:01.078391 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-11-08 13:57:01.078397 | orchestrator | Saturday 08 November 2025 13:55:16 +0000 (0:00:00.129) 0:00:05.115 ***** 2025-11-08 13:57:01.078412 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:57:01.078419 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:57:01.078425 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:57:01.078431 | orchestrator | 2025-11-08 13:57:01.078437 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-11-08 13:57:01.078444 | orchestrator | Saturday 08 November 2025 13:55:17 +0000 (0:00:00.283) 0:00:05.398 ***** 2025-11-08 13:57:01.078450 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:57:01.078461 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:57:01.078468 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:57:01.078474 | orchestrator | 2025-11-08 13:57:01.078480 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-11-08 13:57:01.078486 | orchestrator | Saturday 08 November 2025 13:55:17 +0000 (0:00:00.304) 0:00:05.703 ***** 2025-11-08 13:57:01.078493 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:57:01.078499 | orchestrator | 2025-11-08 13:57:01.078505 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-11-08 13:57:01.078511 | orchestrator | Saturday 08 November 2025 13:55:17 +0000 (0:00:00.271) 0:00:05.974 ***** 2025-11-08 13:57:01.078517 | orchestrator | skipping: [testbed-node-0] 
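The repeated horizon policy tasks above follow a simple per-service pattern: for each enabled dashboard service, look for an operator-supplied policy override and only record it if the file actually exists; otherwise the check is skipped, as it is for every service in this run. Below is a minimal, illustrative sketch of that pattern as a standalone playbook. The directory, file naming scheme, and variable names (custom_policy_dir, horizon_custom_policies) are assumptions for the example only, not the role's actual implementation.

# Sketch only: per-service policy override lookup, assuming hypothetical
# custom_policy_dir and <service>_policy.yaml naming.
- name: Look for custom policy files for dashboard services
  hosts: localhost
  gather_facts: false
  vars:
    custom_policy_dir: /opt/configuration/environments/kolla/files/overlays/horizon
    services: [ceilometer, cinder, designate, glance, keystone, magnum, manila, neutron, nova, octavia]
  tasks:
    - name: Check whether a policy override exists for each service
      ansible.builtin.stat:
        path: "{{ custom_policy_dir }}/{{ item }}_policy.yaml"
      loop: "{{ services }}"
      register: policy_files

    - name: Record only the overrides that were found
      ansible.builtin.set_fact:
        horizon_custom_policies: "{{ policy_files.results
          | selectattr('stat.exists')
          | map(attribute='item') | list }}"

In this deployment no overrides are present, which is why the "Check if policies shall be overwritten" and "Update custom policy file name" tasks end in skipping for all nodes.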
2025-11-08 13:57:01.078523 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:57:01.078529 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:57:01.078535 | orchestrator | 2025-11-08 13:57:01.078542 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-11-08 13:57:01.078557 | orchestrator | Saturday 08 November 2025 13:55:17 +0000 (0:00:00.284) 0:00:06.258 ***** 2025-11-08 13:57:01.078564 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:57:01.078570 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:57:01.078576 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:57:01.078582 | orchestrator | 2025-11-08 13:57:01.078588 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-11-08 13:57:01.078594 | orchestrator | Saturday 08 November 2025 13:55:18 +0000 (0:00:00.322) 0:00:06.581 ***** 2025-11-08 13:57:01.078600 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:57:01.078607 | orchestrator | 2025-11-08 13:57:01.078613 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-11-08 13:57:01.078619 | orchestrator | Saturday 08 November 2025 13:55:18 +0000 (0:00:00.145) 0:00:06.726 ***** 2025-11-08 13:57:01.078625 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:57:01.078631 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:57:01.078637 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:57:01.078643 | orchestrator | 2025-11-08 13:57:01.078650 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-11-08 13:57:01.078656 | orchestrator | Saturday 08 November 2025 13:55:18 +0000 (0:00:00.294) 0:00:07.021 ***** 2025-11-08 13:57:01.078662 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:57:01.078668 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:57:01.078674 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:57:01.078680 | orchestrator | 2025-11-08 13:57:01.078686 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-11-08 13:57:01.078693 | orchestrator | Saturday 08 November 2025 13:55:19 +0000 (0:00:00.474) 0:00:07.496 ***** 2025-11-08 13:57:01.078699 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:57:01.078705 | orchestrator | 2025-11-08 13:57:01.078711 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-11-08 13:57:01.078717 | orchestrator | Saturday 08 November 2025 13:55:19 +0000 (0:00:00.149) 0:00:07.645 ***** 2025-11-08 13:57:01.078724 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:57:01.078730 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:57:01.078736 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:57:01.078742 | orchestrator | 2025-11-08 13:57:01.078748 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-11-08 13:57:01.078754 | orchestrator | Saturday 08 November 2025 13:55:19 +0000 (0:00:00.293) 0:00:07.939 ***** 2025-11-08 13:57:01.078765 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:57:01.078771 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:57:01.078777 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:57:01.078783 | orchestrator | 2025-11-08 13:57:01.078790 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-11-08 13:57:01.078796 | orchestrator | Saturday 08 November 2025 13:55:19 +0000 
(0:00:00.314) 0:00:08.253 ***** 2025-11-08 13:57:01.078802 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:57:01.078808 | orchestrator | 2025-11-08 13:57:01.078814 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-11-08 13:57:01.078820 | orchestrator | Saturday 08 November 2025 13:55:20 +0000 (0:00:00.144) 0:00:08.398 ***** 2025-11-08 13:57:01.078826 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:57:01.078833 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:57:01.078839 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:57:01.078845 | orchestrator | 2025-11-08 13:57:01.078851 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-11-08 13:57:01.078857 | orchestrator | Saturday 08 November 2025 13:55:20 +0000 (0:00:00.296) 0:00:08.695 ***** 2025-11-08 13:57:01.078863 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:57:01.078869 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:57:01.078876 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:57:01.078882 | orchestrator | 2025-11-08 13:57:01.078888 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-11-08 13:57:01.078894 | orchestrator | Saturday 08 November 2025 13:55:20 +0000 (0:00:00.565) 0:00:09.261 ***** 2025-11-08 13:57:01.078900 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:57:01.078906 | orchestrator | 2025-11-08 13:57:01.078912 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-11-08 13:57:01.078919 | orchestrator | Saturday 08 November 2025 13:55:21 +0000 (0:00:00.149) 0:00:09.410 ***** 2025-11-08 13:57:01.078925 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:57:01.078931 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:57:01.078937 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:57:01.078943 | orchestrator | 2025-11-08 13:57:01.078949 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-11-08 13:57:01.078955 | orchestrator | Saturday 08 November 2025 13:55:21 +0000 (0:00:00.317) 0:00:09.728 ***** 2025-11-08 13:57:01.078962 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:57:01.078968 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:57:01.078974 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:57:01.078980 | orchestrator | 2025-11-08 13:57:01.078986 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-11-08 13:57:01.078992 | orchestrator | Saturday 08 November 2025 13:55:21 +0000 (0:00:00.312) 0:00:10.040 ***** 2025-11-08 13:57:01.078999 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:57:01.079005 | orchestrator | 2025-11-08 13:57:01.079011 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-11-08 13:57:01.079021 | orchestrator | Saturday 08 November 2025 13:55:21 +0000 (0:00:00.123) 0:00:10.163 ***** 2025-11-08 13:57:01.079027 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:57:01.079033 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:57:01.079039 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:57:01.079045 | orchestrator | 2025-11-08 13:57:01.079052 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-11-08 13:57:01.079058 | orchestrator | Saturday 08 November 2025 13:55:22 +0000 (0:00:00.293) 
0:00:10.457 ***** 2025-11-08 13:57:01.079083 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:57:01.079090 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:57:01.079096 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:57:01.079102 | orchestrator | 2025-11-08 13:57:01.079108 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-11-08 13:57:01.079114 | orchestrator | Saturday 08 November 2025 13:55:22 +0000 (0:00:00.540) 0:00:10.998 ***** 2025-11-08 13:57:01.079121 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:57:01.079136 | orchestrator | 2025-11-08 13:57:01.079146 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-11-08 13:57:01.079153 | orchestrator | Saturday 08 November 2025 13:55:22 +0000 (0:00:00.129) 0:00:11.127 ***** 2025-11-08 13:57:01.079159 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:57:01.079165 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:57:01.079171 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:57:01.079178 | orchestrator | 2025-11-08 13:57:01.079184 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-11-08 13:57:01.079190 | orchestrator | Saturday 08 November 2025 13:55:23 +0000 (0:00:00.346) 0:00:11.474 ***** 2025-11-08 13:57:01.079196 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:57:01.079202 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:57:01.079208 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:57:01.079214 | orchestrator | 2025-11-08 13:57:01.079221 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-11-08 13:57:01.079227 | orchestrator | Saturday 08 November 2025 13:55:23 +0000 (0:00:00.298) 0:00:11.773 ***** 2025-11-08 13:57:01.079233 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:57:01.079239 | orchestrator | 2025-11-08 13:57:01.079245 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-11-08 13:57:01.079252 | orchestrator | Saturday 08 November 2025 13:55:23 +0000 (0:00:00.128) 0:00:11.902 ***** 2025-11-08 13:57:01.079258 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:57:01.079264 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:57:01.079270 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:57:01.079276 | orchestrator | 2025-11-08 13:57:01.079282 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-11-08 13:57:01.079289 | orchestrator | Saturday 08 November 2025 13:55:24 +0000 (0:00:00.546) 0:00:12.448 ***** 2025-11-08 13:57:01.079295 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:57:01.079301 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:57:01.079307 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:57:01.079313 | orchestrator | 2025-11-08 13:57:01.079319 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-11-08 13:57:01.079325 | orchestrator | Saturday 08 November 2025 13:55:25 +0000 (0:00:01.576) 0:00:14.025 ***** 2025-11-08 13:57:01.079332 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-11-08 13:57:01.079338 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-11-08 13:57:01.079344 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-11-08 13:57:01.079350 | orchestrator | 2025-11-08 13:57:01.079357 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-11-08 13:57:01.079363 | orchestrator | Saturday 08 November 2025 13:55:27 +0000 (0:00:01.596) 0:00:15.622 ***** 2025-11-08 13:57:01.079369 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-11-08 13:57:01.079376 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-11-08 13:57:01.079383 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-11-08 13:57:01.079389 | orchestrator | 2025-11-08 13:57:01.079395 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-11-08 13:57:01.079401 | orchestrator | Saturday 08 November 2025 13:55:29 +0000 (0:00:02.231) 0:00:17.853 ***** 2025-11-08 13:57:01.079407 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-11-08 13:57:01.079413 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-11-08 13:57:01.079420 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-11-08 13:57:01.079426 | orchestrator | 2025-11-08 13:57:01.079432 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-11-08 13:57:01.079443 | orchestrator | Saturday 08 November 2025 13:55:31 +0000 (0:00:02.384) 0:00:20.238 ***** 2025-11-08 13:57:01.079449 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:57:01.079455 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:57:01.079461 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:57:01.079467 | orchestrator | 2025-11-08 13:57:01.079474 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-11-08 13:57:01.079480 | orchestrator | Saturday 08 November 2025 13:55:32 +0000 (0:00:00.305) 0:00:20.544 ***** 2025-11-08 13:57:01.079486 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:57:01.079492 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:57:01.079498 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:57:01.079504 | orchestrator | 2025-11-08 13:57:01.079510 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-11-08 13:57:01.079517 | orchestrator | Saturday 08 November 2025 13:55:32 +0000 (0:00:00.284) 0:00:20.828 ***** 2025-11-08 13:57:01.079526 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:57:01.079533 | orchestrator | 2025-11-08 13:57:01.079539 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-11-08 13:57:01.079545 | orchestrator | Saturday 08 November 2025 13:55:33 +0000 (0:00:00.735) 0:00:21.564 ***** 2025-11-08 13:57:01.079560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 
'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-11-08 13:57:01.079573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-11-08 13:57:01.079591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-11-08 13:57:01.079599 | orchestrator | 2025-11-08 13:57:01.079605 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-11-08 13:57:01.079616 | orchestrator | Saturday 08 November 2025 13:55:34 +0000 (0:00:01.654) 0:00:23.218 ***** 2025-11-08 13:57:01.079631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 
'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-11-08 13:57:01.079638 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:57:01.079645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-11-08 13:57:01.079658 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:57:01.079674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-11-08 13:57:01.079681 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:57:01.079688 | orchestrator | 2025-11-08 13:57:01.079694 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-11-08 13:57:01.079700 | orchestrator | Saturday 08 November 2025 13:55:35 +0000 (0:00:00.632) 0:00:23.850 ***** 2025-11-08 13:57:01.079707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 
'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-11-08 13:57:01.079718 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:57:01.079735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-11-08 13:57:01.079742 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:57:01.079749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-11-08 13:57:01.079760 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:57:01.079766 | orchestrator | 2025-11-08 13:57:01.079773 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-11-08 13:57:01.079779 | orchestrator | Saturday 08 November 2025 13:55:36 +0000 (0:00:00.903) 0:00:24.753 ***** 2025-11-08 13:57:01.079794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 
'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-11-08 13:57:01.079805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-11-08 13:57:01.079823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-11-08 13:57:01.079837 | orchestrator | 2025-11-08 13:57:01.079843 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-11-08 13:57:01.079849 | orchestrator | Saturday 08 November 2025 13:55:37 +0000 (0:00:01.471) 0:00:26.225 ***** 2025-11-08 13:57:01.079856 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:57:01.079862 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:57:01.079868 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:57:01.079874 | orchestrator | 2025-11-08 13:57:01.079880 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-11-08 13:57:01.079886 | orchestrator | Saturday 08 November 2025 13:55:38 +0000 (0:00:00.308) 0:00:26.533 ***** 2025-11-08 13:57:01.079893 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:57:01.079899 | orchestrator | 2025-11-08 13:57:01.079905 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-11-08 13:57:01.079911 | orchestrator | Saturday 08 November 2025 13:55:38 +0000 
(0:00:00.524) 0:00:27.057 ***** 2025-11-08 13:57:01.079917 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:57:01.079924 | orchestrator | 2025-11-08 13:57:01.079930 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-11-08 13:57:01.079936 | orchestrator | Saturday 08 November 2025 13:55:41 +0000 (0:00:02.592) 0:00:29.650 ***** 2025-11-08 13:57:01.079942 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:57:01.079948 | orchestrator | 2025-11-08 13:57:01.079954 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-11-08 13:57:01.079961 | orchestrator | Saturday 08 November 2025 13:55:44 +0000 (0:00:02.769) 0:00:32.419 ***** 2025-11-08 13:57:01.079967 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:57:01.079973 | orchestrator | 2025-11-08 13:57:01.079979 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-11-08 13:57:01.079985 | orchestrator | Saturday 08 November 2025 13:56:00 +0000 (0:00:16.285) 0:00:48.705 ***** 2025-11-08 13:57:01.079991 | orchestrator | 2025-11-08 13:57:01.079997 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-11-08 13:57:01.080003 | orchestrator | Saturday 08 November 2025 13:56:00 +0000 (0:00:00.070) 0:00:48.775 ***** 2025-11-08 13:57:01.080010 | orchestrator | 2025-11-08 13:57:01.080016 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-11-08 13:57:01.080022 | orchestrator | Saturday 08 November 2025 13:56:00 +0000 (0:00:00.067) 0:00:48.843 ***** 2025-11-08 13:57:01.080028 | orchestrator | 2025-11-08 13:57:01.080034 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-11-08 13:57:01.080040 | orchestrator | Saturday 08 November 2025 13:56:00 +0000 (0:00:00.070) 0:00:48.913 ***** 2025-11-08 13:57:01.080046 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:57:01.080056 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:57:01.080062 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:57:01.080079 | orchestrator | 2025-11-08 13:57:01.080085 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 13:57:01.080091 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-11-08 13:57:01.080098 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-11-08 13:57:01.080104 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-11-08 13:57:01.080110 | orchestrator | 2025-11-08 13:57:01.080117 | orchestrator | 2025-11-08 13:57:01.080126 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 13:57:01.080133 | orchestrator | Saturday 08 November 2025 13:57:00 +0000 (0:01:00.028) 0:01:48.942 ***** 2025-11-08 13:57:01.080143 | orchestrator | =============================================================================== 2025-11-08 13:57:01.080150 | orchestrator | horizon : Restart horizon container ------------------------------------ 60.03s 2025-11-08 13:57:01.080156 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.29s 2025-11-08 13:57:01.080162 | orchestrator | horizon : Creating Horizon database user and setting permissions 
-------- 2.77s 2025-11-08 13:57:01.080168 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.59s 2025-11-08 13:57:01.080174 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.38s 2025-11-08 13:57:01.080180 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.23s 2025-11-08 13:57:01.080186 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.65s 2025-11-08 13:57:01.080192 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.60s 2025-11-08 13:57:01.080198 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.58s 2025-11-08 13:57:01.080204 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.47s 2025-11-08 13:57:01.080210 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.18s 2025-11-08 13:57:01.080217 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.90s 2025-11-08 13:57:01.080223 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.74s 2025-11-08 13:57:01.080229 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.69s 2025-11-08 13:57:01.080235 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.63s 2025-11-08 13:57:01.080241 | orchestrator | horizon : Update policy file name --------------------------------------- 0.57s 2025-11-08 13:57:01.080247 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.55s 2025-11-08 13:57:01.080253 | orchestrator | horizon : Update policy file name --------------------------------------- 0.54s 2025-11-08 13:57:01.080260 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.52s 2025-11-08 13:57:01.080266 | orchestrator | horizon : Set empty custom policy --------------------------------------- 0.48s 2025-11-08 13:57:01.080272 | orchestrator | 2025-11-08 13:57:01 | INFO  | Task 33faeb72-0734-48c4-b27e-dc0d67ed43cf is in state STARTED 2025-11-08 13:57:01.080278 | orchestrator | 2025-11-08 13:57:01 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:57:04.128609 | orchestrator | 2025-11-08 13:57:04 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:57:04.129805 | orchestrator | 2025-11-08 13:57:04 | INFO  | Task 33faeb72-0734-48c4-b27e-dc0d67ed43cf is in state STARTED 2025-11-08 13:57:04.129845 | orchestrator | 2025-11-08 13:57:04 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:57:07.176141 | orchestrator | 2025-11-08 13:57:07 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:57:07.177107 | orchestrator | 2025-11-08 13:57:07 | INFO  | Task 33faeb72-0734-48c4-b27e-dc0d67ed43cf is in state STARTED 2025-11-08 13:57:07.177248 | orchestrator | 2025-11-08 13:57:07 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:57:10.217571 | orchestrator | 2025-11-08 13:57:10 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:57:10.219435 | orchestrator | 2025-11-08 13:57:10 | INFO  | Task 33faeb72-0734-48c4-b27e-dc0d67ed43cf is in state STARTED 2025-11-08 13:57:10.219481 | orchestrator | 2025-11-08 13:57:10 | INFO  | Wait 1 second(s) until the next check 2025-11-08 
13:57:13.262163 | orchestrator | 2025-11-08 13:57:13 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:57:13.265773 | orchestrator | 2025-11-08 13:57:13 | INFO  | Task 33faeb72-0734-48c4-b27e-dc0d67ed43cf is in state STARTED 2025-11-08 13:57:13.265875 | orchestrator | 2025-11-08 13:57:13 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:57:16.301585 | orchestrator | 2025-11-08 13:57:16 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:57:16.302721 | orchestrator | 2025-11-08 13:57:16 | INFO  | Task 33faeb72-0734-48c4-b27e-dc0d67ed43cf is in state STARTED 2025-11-08 13:57:16.303304 | orchestrator | 2025-11-08 13:57:16 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:57:19.352314 | orchestrator | 2025-11-08 13:57:19 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:57:19.353028 | orchestrator | 2025-11-08 13:57:19 | INFO  | Task 33faeb72-0734-48c4-b27e-dc0d67ed43cf is in state STARTED 2025-11-08 13:57:19.353981 | orchestrator | 2025-11-08 13:57:19 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:57:22.395748 | orchestrator | 2025-11-08 13:57:22 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:57:22.398466 | orchestrator | 2025-11-08 13:57:22 | INFO  | Task 33faeb72-0734-48c4-b27e-dc0d67ed43cf is in state STARTED 2025-11-08 13:57:22.398516 | orchestrator | 2025-11-08 13:57:22 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:57:25.446980 | orchestrator | 2025-11-08 13:57:25 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:57:25.450693 | orchestrator | 2025-11-08 13:57:25 | INFO  | Task 33faeb72-0734-48c4-b27e-dc0d67ed43cf is in state STARTED 2025-11-08 13:57:25.450744 | orchestrator | 2025-11-08 13:57:25 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:57:28.487657 | orchestrator | 2025-11-08 13:57:28 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:57:28.488567 | orchestrator | 2025-11-08 13:57:28 | INFO  | Task 33faeb72-0734-48c4-b27e-dc0d67ed43cf is in state STARTED 2025-11-08 13:57:28.488601 | orchestrator | 2025-11-08 13:57:28 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:57:31.534781 | orchestrator | 2025-11-08 13:57:31 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:57:31.535565 | orchestrator | 2025-11-08 13:57:31 | INFO  | Task 33faeb72-0734-48c4-b27e-dc0d67ed43cf is in state STARTED 2025-11-08 13:57:31.535821 | orchestrator | 2025-11-08 13:57:31 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:57:34.582475 | orchestrator | 2025-11-08 13:57:34 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:57:34.584523 | orchestrator | 2025-11-08 13:57:34 | INFO  | Task 33faeb72-0734-48c4-b27e-dc0d67ed43cf is in state STARTED 2025-11-08 13:57:34.584572 | orchestrator | 2025-11-08 13:57:34 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:57:37.619707 | orchestrator | 2025-11-08 13:57:37 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:57:37.621070 | orchestrator | 2025-11-08 13:57:37 | INFO  | Task 33faeb72-0734-48c4-b27e-dc0d67ed43cf is in state STARTED 2025-11-08 13:57:37.621130 | orchestrator | 2025-11-08 13:57:37 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:57:40.665738 | orchestrator | 2025-11-08 13:57:40 | INFO  | Task 
b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:57:40.667397 | orchestrator | 2025-11-08 13:57:40 | INFO  | Task 33faeb72-0734-48c4-b27e-dc0d67ed43cf is in state STARTED 2025-11-08 13:57:40.667449 | orchestrator | 2025-11-08 13:57:40 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:57:43.710511 | orchestrator | 2025-11-08 13:57:43 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:57:43.711572 | orchestrator | 2025-11-08 13:57:43 | INFO  | Task 33faeb72-0734-48c4-b27e-dc0d67ed43cf is in state STARTED 2025-11-08 13:57:43.711664 | orchestrator | 2025-11-08 13:57:43 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:57:46.755453 | orchestrator | 2025-11-08 13:57:46 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:57:46.756841 | orchestrator | 2025-11-08 13:57:46 | INFO  | Task 33faeb72-0734-48c4-b27e-dc0d67ed43cf is in state STARTED 2025-11-08 13:57:46.756862 | orchestrator | 2025-11-08 13:57:46 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:57:49.807876 | orchestrator | 2025-11-08 13:57:49 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:57:49.809260 | orchestrator | 2025-11-08 13:57:49 | INFO  | Task 33faeb72-0734-48c4-b27e-dc0d67ed43cf is in state STARTED 2025-11-08 13:57:49.809296 | orchestrator | 2025-11-08 13:57:49 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:57:52.851165 | orchestrator | 2025-11-08 13:57:52 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:57:52.851690 | orchestrator | 2025-11-08 13:57:52 | INFO  | Task 33faeb72-0734-48c4-b27e-dc0d67ed43cf is in state STARTED 2025-11-08 13:57:52.851722 | orchestrator | 2025-11-08 13:57:52 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:57:55.882732 | orchestrator | 2025-11-08 13:57:55 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:57:55.884487 | orchestrator | 2025-11-08 13:57:55 | INFO  | Task 33faeb72-0734-48c4-b27e-dc0d67ed43cf is in state STARTED 2025-11-08 13:57:55.884531 | orchestrator | 2025-11-08 13:57:55 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:57:58.923396 | orchestrator | 2025-11-08 13:57:58 | INFO  | Task cfb942d5-0d1a-45ea-8edf-40618b84a6dc is in state STARTED 2025-11-08 13:57:58.926102 | orchestrator | 2025-11-08 13:57:58 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:57:58.928337 | orchestrator | 2025-11-08 13:57:58 | INFO  | Task 458dda79-f6f6-4d52-8f9b-366962991c3a is in state STARTED 2025-11-08 13:57:58.931533 | orchestrator | 2025-11-08 13:57:58 | INFO  | Task 33faeb72-0734-48c4-b27e-dc0d67ed43cf is in state SUCCESS 2025-11-08 13:57:58.933186 | orchestrator | 2025-11-08 13:57:58 | INFO  | Task 087f68cd-e2c0-4c51-a29e-5932856fefbf is in state STARTED 2025-11-08 13:57:58.933413 | orchestrator | 2025-11-08 13:57:58 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:58:01.982269 | orchestrator | 2025-11-08 13:58:01 | INFO  | Task cfb942d5-0d1a-45ea-8edf-40618b84a6dc is in state STARTED 2025-11-08 13:58:01.984788 | orchestrator | 2025-11-08 13:58:01 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state STARTED 2025-11-08 13:58:01.985443 | orchestrator | 2025-11-08 13:58:01 | INFO  | Task 458dda79-f6f6-4d52-8f9b-366962991c3a is in state STARTED 2025-11-08 13:58:01.986524 | orchestrator | 2025-11-08 13:58:01 | INFO  | Task 087f68cd-e2c0-4c51-a29e-5932856fefbf is in state 
STARTED 2025-11-08 13:58:01.986814 | orchestrator | 2025-11-08 13:58:01 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:58:05.021801 | orchestrator | 2025-11-08 13:58:05 | INFO  | Task cfb942d5-0d1a-45ea-8edf-40618b84a6dc is in state SUCCESS 2025-11-08 13:58:05.021931 | orchestrator | 2025-11-08 13:58:05 | INFO  | Task cd72c351-5ca4-4384-9f1c-cab6a4db6339 is in state STARTED 2025-11-08 13:58:05.022901 | orchestrator | 2025-11-08 13:58:05 | INFO  | Task b0cfeca3-fc5d-4499-91fd-3ce22fc96520 is in state SUCCESS 2025-11-08 13:58:05.025576 | orchestrator | 2025-11-08 13:58:05.025628 | orchestrator | 2025-11-08 13:58:05.025642 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-11-08 13:58:05.025656 | orchestrator | 2025-11-08 13:58:05.025669 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-11-08 13:58:05.025682 | orchestrator | Saturday 08 November 2025 13:57:01 +0000 (0:00:00.221) 0:00:00.221 ***** 2025-11-08 13:58:05.025696 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-11-08 13:58:05.025713 | orchestrator | 2025-11-08 13:58:05.025728 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-11-08 13:58:05.025741 | orchestrator | Saturday 08 November 2025 13:57:01 +0000 (0:00:00.215) 0:00:00.437 ***** 2025-11-08 13:58:05.025754 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-11-08 13:58:05.025765 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-11-08 13:58:05.025778 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-11-08 13:58:05.025791 | orchestrator | 2025-11-08 13:58:05.025803 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-11-08 13:58:05.025815 | orchestrator | Saturday 08 November 2025 13:57:02 +0000 (0:00:01.311) 0:00:01.749 ***** 2025-11-08 13:58:05.025828 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-11-08 13:58:05.025842 | orchestrator | 2025-11-08 13:58:05.025854 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-11-08 13:58:05.025866 | orchestrator | Saturday 08 November 2025 13:57:04 +0000 (0:00:01.473) 0:00:03.222 ***** 2025-11-08 13:58:05.025878 | orchestrator | changed: [testbed-manager] 2025-11-08 13:58:05.025891 | orchestrator | 2025-11-08 13:58:05.025904 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-11-08 13:58:05.025917 | orchestrator | Saturday 08 November 2025 13:57:05 +0000 (0:00:00.876) 0:00:04.099 ***** 2025-11-08 13:58:05.025929 | orchestrator | changed: [testbed-manager] 2025-11-08 13:58:05.025941 | orchestrator | 2025-11-08 13:58:05.025956 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-11-08 13:58:05.025968 | orchestrator | Saturday 08 November 2025 13:57:05 +0000 (0:00:00.789) 0:00:04.888 ***** 2025-11-08 13:58:05.026007 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
2025-11-08 13:58:05.026073 | orchestrator | ok: [testbed-manager] 2025-11-08 13:58:05.026088 | orchestrator | 2025-11-08 13:58:05.026102 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-11-08 13:58:05.026116 | orchestrator | Saturday 08 November 2025 13:57:47 +0000 (0:00:42.023) 0:00:46.912 ***** 2025-11-08 13:58:05.026150 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-11-08 13:58:05.026166 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-11-08 13:58:05.026179 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-11-08 13:58:05.026189 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-11-08 13:58:05.026198 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-11-08 13:58:05.026208 | orchestrator | 2025-11-08 13:58:05.026217 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-11-08 13:58:05.026228 | orchestrator | Saturday 08 November 2025 13:57:52 +0000 (0:00:04.018) 0:00:50.930 ***** 2025-11-08 13:58:05.026236 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-11-08 13:58:05.026246 | orchestrator | 2025-11-08 13:58:05.026255 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-11-08 13:58:05.026265 | orchestrator | Saturday 08 November 2025 13:57:52 +0000 (0:00:00.462) 0:00:51.393 ***** 2025-11-08 13:58:05.026274 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:58:05.026283 | orchestrator | 2025-11-08 13:58:05.026292 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-11-08 13:58:05.026317 | orchestrator | Saturday 08 November 2025 13:57:52 +0000 (0:00:00.126) 0:00:51.519 ***** 2025-11-08 13:58:05.026326 | orchestrator | skipping: [testbed-manager] 2025-11-08 13:58:05.026335 | orchestrator | 2025-11-08 13:58:05.026345 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-11-08 13:58:05.026354 | orchestrator | Saturday 08 November 2025 13:57:53 +0000 (0:00:00.495) 0:00:52.014 ***** 2025-11-08 13:58:05.026363 | orchestrator | changed: [testbed-manager] 2025-11-08 13:58:05.026372 | orchestrator | 2025-11-08 13:58:05.026381 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-11-08 13:58:05.026391 | orchestrator | Saturday 08 November 2025 13:57:54 +0000 (0:00:01.603) 0:00:53.617 ***** 2025-11-08 13:58:05.026400 | orchestrator | changed: [testbed-manager] 2025-11-08 13:58:05.026409 | orchestrator | 2025-11-08 13:58:05.026418 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-11-08 13:58:05.026427 | orchestrator | Saturday 08 November 2025 13:57:55 +0000 (0:00:00.766) 0:00:54.384 ***** 2025-11-08 13:58:05.026441 | orchestrator | changed: [testbed-manager] 2025-11-08 13:58:05.026453 | orchestrator | 2025-11-08 13:58:05.026465 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-11-08 13:58:05.026477 | orchestrator | Saturday 08 November 2025 13:57:56 +0000 (0:00:00.630) 0:00:55.014 ***** 2025-11-08 13:58:05.026491 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-11-08 13:58:05.026505 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-11-08 13:58:05.026519 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-11-08 13:58:05.026532 | orchestrator | ok: 
[testbed-manager] => (item=rbd) 2025-11-08 13:58:05.026545 | orchestrator | 2025-11-08 13:58:05.026558 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 13:58:05.026571 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 13:58:05.026585 | orchestrator | 2025-11-08 13:58:05.026598 | orchestrator | 2025-11-08 13:58:05.026632 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 13:58:05.026648 | orchestrator | Saturday 08 November 2025 13:57:57 +0000 (0:00:01.408) 0:00:56.422 ***** 2025-11-08 13:58:05.026661 | orchestrator | =============================================================================== 2025-11-08 13:58:05.026675 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 42.02s 2025-11-08 13:58:05.026684 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.02s 2025-11-08 13:58:05.026692 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.60s 2025-11-08 13:58:05.026700 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.47s 2025-11-08 13:58:05.026708 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.41s 2025-11-08 13:58:05.026715 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.31s 2025-11-08 13:58:05.026723 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.88s 2025-11-08 13:58:05.026731 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.79s 2025-11-08 13:58:05.026739 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.77s 2025-11-08 13:58:05.026746 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.63s 2025-11-08 13:58:05.026754 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.50s 2025-11-08 13:58:05.026762 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.46s 2025-11-08 13:58:05.026769 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.22s 2025-11-08 13:58:05.026777 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s 2025-11-08 13:58:05.026785 | orchestrator | 2025-11-08 13:58:05.026793 | orchestrator | 2025-11-08 13:58:05.026801 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-08 13:58:05.026817 | orchestrator | 2025-11-08 13:58:05.026825 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-08 13:58:05.026833 | orchestrator | Saturday 08 November 2025 13:58:01 +0000 (0:00:00.182) 0:00:00.183 ***** 2025-11-08 13:58:05.026845 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:58:05.026857 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:58:05.026870 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:58:05.026883 | orchestrator | 2025-11-08 13:58:05.026896 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-08 13:58:05.026909 | orchestrator | Saturday 08 November 2025 13:58:02 +0000 (0:00:00.369) 0:00:00.552 ***** 2025-11-08 13:58:05.026922 | orchestrator | ok: 
[testbed-node-0] => (item=enable_keystone_True) 2025-11-08 13:58:05.026935 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-11-08 13:58:05.026956 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-11-08 13:58:05.026988 | orchestrator | 2025-11-08 13:58:05.027001 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-11-08 13:58:05.027016 | orchestrator | 2025-11-08 13:58:05.027028 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-11-08 13:58:05.027036 | orchestrator | Saturday 08 November 2025 13:58:03 +0000 (0:00:01.133) 0:00:01.685 ***** 2025-11-08 13:58:05.027044 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:58:05.027051 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:58:05.027059 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:58:05.027067 | orchestrator | 2025-11-08 13:58:05.027075 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 13:58:05.027084 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 13:58:05.027092 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 13:58:05.027100 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 13:58:05.027108 | orchestrator | 2025-11-08 13:58:05.027116 | orchestrator | 2025-11-08 13:58:05.027124 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 13:58:05.027132 | orchestrator | Saturday 08 November 2025 13:58:04 +0000 (0:00:00.748) 0:00:02.434 ***** 2025-11-08 13:58:05.027139 | orchestrator | =============================================================================== 2025-11-08 13:58:05.027147 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.13s 2025-11-08 13:58:05.027155 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.75s 2025-11-08 13:58:05.027163 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.37s 2025-11-08 13:58:05.027171 | orchestrator | 2025-11-08 13:58:05.027178 | orchestrator | 2025-11-08 13:58:05.027186 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-08 13:58:05.027194 | orchestrator | 2025-11-08 13:58:05.027202 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-08 13:58:05.027209 | orchestrator | Saturday 08 November 2025 13:55:11 +0000 (0:00:00.255) 0:00:00.255 ***** 2025-11-08 13:58:05.027217 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:58:05.027225 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:58:05.027233 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:58:05.027241 | orchestrator | 2025-11-08 13:58:05.027249 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-08 13:58:05.027256 | orchestrator | Saturday 08 November 2025 13:55:12 +0000 (0:00:00.279) 0:00:00.534 ***** 2025-11-08 13:58:05.027264 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-11-08 13:58:05.027272 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-11-08 13:58:05.027280 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-11-08 
13:58:05.027288 | orchestrator | 2025-11-08 13:58:05.027296 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-11-08 13:58:05.027311 | orchestrator | 2025-11-08 13:58:05.027327 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-11-08 13:58:05.027335 | orchestrator | Saturday 08 November 2025 13:55:12 +0000 (0:00:00.454) 0:00:00.989 ***** 2025-11-08 13:58:05.027344 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:58:05.027351 | orchestrator | 2025-11-08 13:58:05.027359 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-11-08 13:58:05.027367 | orchestrator | Saturday 08 November 2025 13:55:13 +0000 (0:00:00.508) 0:00:01.498 ***** 2025-11-08 13:58:05.027382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-08 13:58:05.027400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-08 13:58:05.027410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-08 13:58:05.027419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-11-08 13:58:05.027442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-11-08 13:58:05.027451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-11-08 13:58:05.027461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-08 13:58:05.027474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-08 13:58:05.027483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-08 13:58:05.027491 | orchestrator | 2025-11-08 13:58:05.027499 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-11-08 13:58:05.027507 | orchestrator | Saturday 08 November 2025 13:55:15 +0000 (0:00:02.029) 0:00:03.527 ***** 2025-11-08 13:58:05.027515 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-11-08 13:58:05.027523 | orchestrator | 2025-11-08 13:58:05.027531 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-11-08 13:58:05.027544 | orchestrator | Saturday 08 November 2025 13:55:16 +0000 (0:00:00.852) 0:00:04.379 ***** 2025-11-08 13:58:05.027552 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:58:05.027560 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:58:05.027568 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:58:05.027576 | orchestrator | 2025-11-08 13:58:05.027583 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-11-08 13:58:05.027591 | orchestrator | Saturday 08 November 2025 13:55:16 +0000 (0:00:00.457) 0:00:04.837 ***** 2025-11-08 13:58:05.027599 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-08 13:58:05.027607 | orchestrator | 2025-11-08 13:58:05.027615 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-11-08 13:58:05.027622 | orchestrator | Saturday 08 November 2025 13:55:17 +0000 (0:00:00.679) 0:00:05.517 ***** 2025-11-08 13:58:05.027631 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:58:05.027639 | orchestrator | 2025-11-08 13:58:05.027651 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-11-08 13:58:05.027659 | orchestrator | Saturday 08 November 2025 13:55:17 +0000 (0:00:00.498) 0:00:06.015 ***** 2025-11-08 13:58:05.027668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-08 13:58:05.027681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-08 13:58:05.027691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-08 13:58:05.027704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-11-08 13:58:05.027720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30',
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-11-08 13:58:05.027734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-11-08 13:58:05.027748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-08 13:58:05.027785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-08 13:58:05.027813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-08 13:58:05.027838 | orchestrator | 2025-11-08 13:58:05.027857 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-11-08 13:58:05.027870 | orchestrator | Saturday 08 November 2025 13:55:20 +0000 (0:00:03.353) 0:00:09.369 ***** 2025-11-08 13:58:05.027884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-11-08 13:58:05.027907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-08 13:58:05.027921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-11-08 13:58:05.027935 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:58:05.027962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-11-08 13:58:05.028034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-08 13:58:05.028060 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-11-08 13:58:05.028072 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:58:05.028097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-11-08 13:58:05.028113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-08 13:58:05.028127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-11-08 13:58:05.028140 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:58:05.028153 | orchestrator | 2025-11-08 13:58:05.028167 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-11-08 13:58:05.028181 | orchestrator | Saturday 08 November 2025 13:55:21 +0000 (0:00:00.873) 0:00:10.242 ***** 2025-11-08 13:58:05.028201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-11-08 13:58:05.028224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-08 13:58:05.028239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-11-08 13:58:05.028252 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:58:05 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 13:58:05.029396 | orchestrator | 2025-11-08 13:58:05 | INFO  | Task 458dda79-f6f6-4d52-8f9b-366962991c3a is in state STARTED 2025-11-08 13:58:05.029410 | orchestrator | 2025-11-08 13:58:05 | INFO  | Task 087f68cd-e2c0-4c51-a29e-5932856fefbf is in state STARTED 2025-11-08 13:58:05.029572 | orchestrator | 2025-11-08 13:58:05 | INFO  | Wait 1 second(s) until the next check 2025-11-08 13:58:05.030194 | orchestrator | 2025-11-08 13:58:05.029359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000',
'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-11-08 13:58:05.030377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-08 13:58:05.030404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-11-08 13:58:05.030419 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:58:05.030435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-11-08 13:58:05.030450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-08 13:58:05.030474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-11-08 13:58:05.030488 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:58:05.030502 | orchestrator | 2025-11-08 13:58:05.030516 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-11-08 13:58:05.030529 | orchestrator | Saturday 08 November 2025 13:55:22 +0000 (0:00:00.760) 0:00:11.003 ***** 2025-11-08 13:58:05.030548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-08 13:58:05.030571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-08 13:58:05.030586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-08 13:58:05.030608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-11-08 13:58:05.030622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-11-08 13:58:05.030636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-11-08 13:58:05.030662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-08 13:58:05.030676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-08 13:58:05.030690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-08 13:58:05.030704 | orchestrator | 2025-11-08 13:58:05.030717 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-11-08 13:58:05.030730 | orchestrator | Saturday 08 November 2025 13:55:25 +0000 (0:00:03.344) 0:00:14.347 ***** 2025-11-08 13:58:05.030753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-08 13:58:05.030768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-08 13:58:05.030794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-08 13:58:05.030808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-08 13:58:05.030823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-08 13:58:05.030837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-08 13:58:05.030859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-08 13:58:05.030873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-08 13:58:05.030902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-08 13:58:05.030916 | orchestrator | 2025-11-08 13:58:05.030930 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-11-08 13:58:05.030943 | orchestrator | Saturday 08 November 2025 13:55:31 +0000 (0:00:05.535) 0:00:19.882 ***** 2025-11-08 13:58:05.030956 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:58:05.031027 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:58:05.031045 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:58:05.031059 | orchestrator | 2025-11-08 13:58:05.031073 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-11-08 13:58:05.031087 | orchestrator | Saturday 08 November 2025 13:55:32 +0000 (0:00:01.438) 0:00:21.321 ***** 2025-11-08 13:58:05.031101 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:58:05.031115 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:58:05.031129 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:58:05.031144 | orchestrator | 2025-11-08 13:58:05.031158 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-11-08 13:58:05.031172 | orchestrator | Saturday 08 November 2025 13:55:33 +0000 (0:00:00.508) 0:00:21.830 ***** 2025-11-08 13:58:05.031186 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:58:05.031200 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:58:05.031214 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:58:05.031227 | orchestrator | 2025-11-08 13:58:05.031241 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-11-08 13:58:05.031255 | orchestrator | Saturday 08 November 2025 13:55:33 +0000 (0:00:00.326) 0:00:22.156 ***** 2025-11-08 13:58:05.031269 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:58:05.031284 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:58:05.031298 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:58:05.031311 | orchestrator | 2025-11-08 13:58:05.031324 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-11-08 13:58:05.031337 | orchestrator | Saturday 08 November 2025 13:55:34 +0000 (0:00:00.475) 0:00:22.632 ***** 2025-11-08 13:58:05.031352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-08 13:58:05.031383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-08 13:58:05.031409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-08 13:58:05.031424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-08 13:58:05.031439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-08 13:58:05.031454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-08 13:58:05.031485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-08 13:58:05.031500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-08 13:58:05.031517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-08 13:58:05.031530 | orchestrator | 2025-11-08 13:58:05.031543 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-11-08 13:58:05.031556 | orchestrator | Saturday 08 November 2025 13:55:36 +0000 (0:00:02.425) 0:00:25.057 ***** 2025-11-08 13:58:05.031569 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:58:05.031582 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:58:05.031595 | orchestrator | 
skipping: [testbed-node-2] 2025-11-08 13:58:05.031608 | orchestrator | 2025-11-08 13:58:05.031621 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-11-08 13:58:05.031635 | orchestrator | Saturday 08 November 2025 13:55:36 +0000 (0:00:00.299) 0:00:25.357 ***** 2025-11-08 13:58:05.031648 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-11-08 13:58:05.031662 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-11-08 13:58:05.031670 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-11-08 13:58:05.031685 | orchestrator | 2025-11-08 13:58:05.031697 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-11-08 13:58:05.031710 | orchestrator | Saturday 08 November 2025 13:55:38 +0000 (0:00:01.620) 0:00:26.978 ***** 2025-11-08 13:58:05.031723 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-08 13:58:05.031737 | orchestrator | 2025-11-08 13:58:05.031751 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-11-08 13:58:05.031764 | orchestrator | Saturday 08 November 2025 13:55:39 +0000 (0:00:00.939) 0:00:27.917 ***** 2025-11-08 13:58:05.031778 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:58:05.031790 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:58:05.031803 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:58:05.031811 | orchestrator | 2025-11-08 13:58:05.031819 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-11-08 13:58:05.031834 | orchestrator | Saturday 08 November 2025 13:55:40 +0000 (0:00:00.781) 0:00:28.699 ***** 2025-11-08 13:58:05.031842 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-08 13:58:05.031850 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-11-08 13:58:05.031858 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-11-08 13:58:05.031865 | orchestrator | 2025-11-08 13:58:05.031873 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-11-08 13:58:05.031887 | orchestrator | Saturday 08 November 2025 13:55:41 +0000 (0:00:01.127) 0:00:29.826 ***** 2025-11-08 13:58:05.031900 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:58:05.031913 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:58:05.031927 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:58:05.031940 | orchestrator | 2025-11-08 13:58:05.031954 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-11-08 13:58:05.031967 | orchestrator | Saturday 08 November 2025 13:55:41 +0000 (0:00:00.296) 0:00:30.122 ***** 2025-11-08 13:58:05.032004 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-11-08 13:58:05.032017 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-11-08 13:58:05.032026 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-11-08 13:58:05.032034 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-11-08 13:58:05.032042 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-11-08 13:58:05.032056 | 
orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-11-08 13:58:05.032064 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-11-08 13:58:05.032072 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-11-08 13:58:05.032080 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-11-08 13:58:05.032088 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-11-08 13:58:05.032095 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-11-08 13:58:05.032103 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-11-08 13:58:05.032111 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-11-08 13:58:05.032123 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-11-08 13:58:05.032137 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-11-08 13:58:05.032150 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-11-08 13:58:05.032163 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-11-08 13:58:05.032174 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-11-08 13:58:05.032187 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-11-08 13:58:05.032201 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-11-08 13:58:05.032221 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-11-08 13:58:05.032234 | orchestrator | 2025-11-08 13:58:05.032245 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-11-08 13:58:05.032253 | orchestrator | Saturday 08 November 2025 13:55:51 +0000 (0:00:09.325) 0:00:39.448 ***** 2025-11-08 13:58:05.032261 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-11-08 13:58:05.032275 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-11-08 13:58:05.032283 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-11-08 13:58:05.032291 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-11-08 13:58:05.032299 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-11-08 13:58:05.032307 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-11-08 13:58:05.032314 | orchestrator | 2025-11-08 13:58:05.032322 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-11-08 13:58:05.032330 | orchestrator | Saturday 08 November 2025 13:55:53 +0000 (0:00:02.875) 0:00:42.323 ***** 2025-11-08 13:58:05.032339 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-08 13:58:05.032358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-08 13:58:05.032373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-08 13:58:05.032393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-11-08 13:58:05.032430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-11-08 13:58:05.032446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-11-08 13:58:05.032460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-08 13:58:05.032477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-08 13:58:05.032486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-08 13:58:05.032494 | orchestrator | 2025-11-08 13:58:05.032502 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-11-08 
13:58:05.032510 | orchestrator | Saturday 08 November 2025 13:55:56 +0000 (0:00:02.297) 0:00:44.621 ***** 2025-11-08 13:58:05.032518 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:58:05.032526 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:58:05.032540 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:58:05.032548 | orchestrator | 2025-11-08 13:58:05.032556 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-11-08 13:58:05.032564 | orchestrator | Saturday 08 November 2025 13:55:56 +0000 (0:00:00.290) 0:00:44.911 ***** 2025-11-08 13:58:05.032572 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:58:05.032580 | orchestrator | 2025-11-08 13:58:05.032592 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-11-08 13:58:05.032600 | orchestrator | Saturday 08 November 2025 13:55:58 +0000 (0:00:02.362) 0:00:47.274 ***** 2025-11-08 13:58:05.032608 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:58:05.032616 | orchestrator | 2025-11-08 13:58:05.032623 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-11-08 13:58:05.032631 | orchestrator | Saturday 08 November 2025 13:56:01 +0000 (0:00:02.259) 0:00:49.533 ***** 2025-11-08 13:58:05.032639 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:58:05.032647 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:58:05.032655 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:58:05.032663 | orchestrator | 2025-11-08 13:58:05.032670 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-11-08 13:58:05.032678 | orchestrator | Saturday 08 November 2025 13:56:02 +0000 (0:00:00.872) 0:00:50.406 ***** 2025-11-08 13:58:05.032686 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:58:05.032694 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:58:05.032702 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:58:05.032709 | orchestrator | 2025-11-08 13:58:05.032717 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-11-08 13:58:05.032725 | orchestrator | Saturday 08 November 2025 13:56:02 +0000 (0:00:00.635) 0:00:51.042 ***** 2025-11-08 13:58:05.032733 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:58:05.032741 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:58:05.032775 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:58:05.032783 | orchestrator | 2025-11-08 13:58:05.032791 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-11-08 13:58:05.032799 | orchestrator | Saturday 08 November 2025 13:56:03 +0000 (0:00:00.346) 0:00:51.389 ***** 2025-11-08 13:58:05.032807 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:58:05.032815 | orchestrator | 2025-11-08 13:58:05.032822 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-11-08 13:58:05.032830 | orchestrator | Saturday 08 November 2025 13:56:17 +0000 (0:00:14.662) 0:01:06.051 ***** 2025-11-08 13:58:05.032838 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:58:05.032846 | orchestrator | 2025-11-08 13:58:05.032854 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-11-08 13:58:05.032862 | orchestrator | Saturday 08 November 2025 13:56:28 +0000 (0:00:10.790) 0:01:16.841 ***** 2025-11-08 13:58:05.032869 | orchestrator | 2025-11-08 
13:58:05.032881 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-11-08 13:58:05.032896 | orchestrator | Saturday 08 November 2025 13:56:28 +0000 (0:00:00.086) 0:01:16.928 ***** 2025-11-08 13:58:05.032910 | orchestrator | 2025-11-08 13:58:05.032925 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-11-08 13:58:05.032940 | orchestrator | Saturday 08 November 2025 13:56:28 +0000 (0:00:00.078) 0:01:17.006 ***** 2025-11-08 13:58:05.032956 | orchestrator | 2025-11-08 13:58:05.032964 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-11-08 13:58:05.033014 | orchestrator | Saturday 08 November 2025 13:56:28 +0000 (0:00:00.079) 0:01:17.086 ***** 2025-11-08 13:58:05.033024 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:58:05.033032 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:58:05.033040 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:58:05.033048 | orchestrator | 2025-11-08 13:58:05.033056 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-11-08 13:58:05.033063 | orchestrator | Saturday 08 November 2025 13:56:53 +0000 (0:00:24.978) 0:01:42.064 ***** 2025-11-08 13:58:05.033078 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:58:05.033086 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:58:05.033093 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:58:05.033101 | orchestrator | 2025-11-08 13:58:05.033109 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-11-08 13:58:05.033117 | orchestrator | Saturday 08 November 2025 13:57:04 +0000 (0:00:11.137) 0:01:53.201 ***** 2025-11-08 13:58:05.033125 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:58:05.033133 | orchestrator | changed: [testbed-node-2] 2025-11-08 13:58:05.033146 | orchestrator | changed: [testbed-node-1] 2025-11-08 13:58:05.033154 | orchestrator | 2025-11-08 13:58:05.033162 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-11-08 13:58:05.033170 | orchestrator | Saturday 08 November 2025 13:57:12 +0000 (0:00:07.478) 0:02:00.679 ***** 2025-11-08 13:58:05.033178 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 13:58:05.033186 | orchestrator | 2025-11-08 13:58:05.033194 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-11-08 13:58:05.033201 | orchestrator | Saturday 08 November 2025 13:57:13 +0000 (0:00:00.727) 0:02:01.407 ***** 2025-11-08 13:58:05.033209 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:58:05.033217 | orchestrator | ok: [testbed-node-1] 2025-11-08 13:58:05.033225 | orchestrator | ok: [testbed-node-2] 2025-11-08 13:58:05.033232 | orchestrator | 2025-11-08 13:58:05.033240 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-11-08 13:58:05.033248 | orchestrator | Saturday 08 November 2025 13:57:13 +0000 (0:00:00.815) 0:02:02.222 ***** 2025-11-08 13:58:05.033256 | orchestrator | changed: [testbed-node-0] 2025-11-08 13:58:05.033264 | orchestrator | 2025-11-08 13:58:05.033271 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-11-08 13:58:05.033279 | orchestrator | Saturday 08 November 2025 13:57:15 +0000 (0:00:01.858) 
0:02:04.081 ***** 2025-11-08 13:58:05.033287 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-11-08 13:58:05.033295 | orchestrator | 2025-11-08 13:58:05.033303 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-11-08 13:58:05.033311 | orchestrator | Saturday 08 November 2025 13:57:26 +0000 (0:00:11.209) 0:02:15.290 ***** 2025-11-08 13:58:05.033318 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-11-08 13:58:05.033326 | orchestrator | 2025-11-08 13:58:05.033334 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-11-08 13:58:05.033342 | orchestrator | Saturday 08 November 2025 13:57:50 +0000 (0:00:23.534) 0:02:38.824 ***** 2025-11-08 13:58:05.033354 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-11-08 13:58:05.033362 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-11-08 13:58:05.033370 | orchestrator | 2025-11-08 13:58:05.033378 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-11-08 13:58:05.033386 | orchestrator | Saturday 08 November 2025 13:57:57 +0000 (0:00:06.824) 0:02:45.649 ***** 2025-11-08 13:58:05.033400 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:58:05.033415 | orchestrator | 2025-11-08 13:58:05.033430 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-11-08 13:58:05.033445 | orchestrator | Saturday 08 November 2025 13:57:57 +0000 (0:00:00.131) 0:02:45.780 ***** 2025-11-08 13:58:05.033460 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:58:05.033476 | orchestrator | 2025-11-08 13:58:05.033490 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-11-08 13:58:05.033505 | orchestrator | Saturday 08 November 2025 13:57:57 +0000 (0:00:00.104) 0:02:45.884 ***** 2025-11-08 13:58:05.033518 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:58:05.033532 | orchestrator | 2025-11-08 13:58:05.033545 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-11-08 13:58:05.033565 | orchestrator | Saturday 08 November 2025 13:57:57 +0000 (0:00:00.118) 0:02:46.003 ***** 2025-11-08 13:58:05.033574 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:58:05.033581 | orchestrator | 2025-11-08 13:58:05.033589 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-11-08 13:58:05.033597 | orchestrator | Saturday 08 November 2025 13:57:58 +0000 (0:00:00.513) 0:02:46.517 ***** 2025-11-08 13:58:05.033605 | orchestrator | ok: [testbed-node-0] 2025-11-08 13:58:05.033613 | orchestrator | 2025-11-08 13:58:05.033620 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-11-08 13:58:05.033628 | orchestrator | Saturday 08 November 2025 13:58:01 +0000 (0:00:03.436) 0:02:49.954 ***** 2025-11-08 13:58:05.033636 | orchestrator | skipping: [testbed-node-0] 2025-11-08 13:58:05.033643 | orchestrator | skipping: [testbed-node-1] 2025-11-08 13:58:05.033651 | orchestrator | skipping: [testbed-node-2] 2025-11-08 13:58:05.033659 | orchestrator | 2025-11-08 13:58:05.033667 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 13:58:05.033676 | orchestrator | 
testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-11-08 13:58:05.033685 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-11-08 13:58:05.033693 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-11-08 13:58:05.033701 | orchestrator | 2025-11-08 13:58:05.033709 | orchestrator | 2025-11-08 13:58:05.033717 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 13:58:05.033725 | orchestrator | Saturday 08 November 2025 13:58:02 +0000 (0:00:00.513) 0:02:50.467 ***** 2025-11-08 13:58:05.033733 | orchestrator | =============================================================================== 2025-11-08 13:58:05.033740 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 24.98s 2025-11-08 13:58:05.033748 | orchestrator | service-ks-register : keystone | Creating services --------------------- 23.53s 2025-11-08 13:58:05.033756 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.66s 2025-11-08 13:58:05.033764 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 11.21s 2025-11-08 13:58:05.033772 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 11.14s 2025-11-08 13:58:05.033785 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.79s 2025-11-08 13:58:05.033794 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.33s 2025-11-08 13:58:05.033801 | orchestrator | keystone : Restart keystone container ----------------------------------- 7.48s 2025-11-08 13:58:05.033809 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.82s 2025-11-08 13:58:05.033817 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.53s 2025-11-08 13:58:05.033825 | orchestrator | keystone : Creating default user role ----------------------------------- 3.44s 2025-11-08 13:58:05.033833 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.35s 2025-11-08 13:58:05.033840 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.34s 2025-11-08 13:58:05.033848 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.88s 2025-11-08 13:58:05.033856 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.43s 2025-11-08 13:58:05.033864 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.36s 2025-11-08 13:58:05.033872 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.30s 2025-11-08 13:58:05.033884 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.26s 2025-11-08 13:58:05.033894 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.03s 2025-11-08 13:58:05.033907 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.86s 2025-11-08 13:58:08.085290 | orchestrator | 2025-11-08 13:58:08 | INFO  | Task cd72c351-5ca4-4384-9f1c-cab6a4db6339 is in state STARTED 2025-11-08 13:58:08.085363 | orchestrator | 2025-11-08 13:58:08 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in 
state STARTED 2025-11-08 13:58:08.085690 | orchestrator | 2025-11-08 13:58:08 | INFO  | Task 458dda79-f6f6-4d52-8f9b-366962991c3a is in state STARTED 2025-11-08 13:58:08.086386 | orchestrator | 2025-11-08 13:58:08 | INFO  | Task 220c2dfa-c5b9-47e8-8eba-185aec635564 is in state STARTED 2025-11-08 13:58:08.086843 | orchestrator | 2025-11-08 13:58:08 | INFO  | Task 087f68cd-e2c0-4c51-a29e-5932856fefbf is in state STARTED 2025-11-08 13:58:08.086872 | orchestrator | 2025-11-08 13:58:08 | INFO  | Wait 1 second(s) until the next check [13:58:11 to 14:00:00: the same five status checks repeat at roughly 3-second intervals; tasks cd72c351-5ca4-4384-9f1c-cab6a4db6339, aa9eac1d-d631-4ee0-a94e-f29fd9052eec, 458dda79-f6f6-4d52-8f9b-366962991c3a and 220c2dfa-c5b9-47e8-8eba-185aec635564 remain in state STARTED throughout, while task 087f68cd-e2c0-4c51-a29e-5932856fefbf reaches state SUCCESS at 13:59:26 and is no longer polled afterwards] 2025-11-08 14:00:03.414167 | orchestrator | 2025-11-08
14:00:03 | INFO  | Task cd72c351-5ca4-4384-9f1c-cab6a4db6339 is in state STARTED 2025-11-08 14:00:03.414244 | orchestrator | 2025-11-08 14:00:03 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:00:03.414259 | orchestrator | 2025-11-08 14:00:03 | INFO  | Task 458dda79-f6f6-4d52-8f9b-366962991c3a is in state STARTED 2025-11-08 14:00:03.414270 | orchestrator | 2025-11-08 14:00:03 | INFO  | Task 220c2dfa-c5b9-47e8-8eba-185aec635564 is in state STARTED 2025-11-08 14:00:03.414281 | orchestrator | 2025-11-08 14:00:03 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:00:06.396358 | orchestrator | 2025-11-08 14:00:06 | INFO  | Task cd72c351-5ca4-4384-9f1c-cab6a4db6339 is in state STARTED 2025-11-08 14:00:06.396491 | orchestrator | 2025-11-08 14:00:06 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:00:06.397284 | orchestrator | 2025-11-08 14:00:06 | INFO  | Task 458dda79-f6f6-4d52-8f9b-366962991c3a is in state STARTED 2025-11-08 14:00:06.398099 | orchestrator | 2025-11-08 14:00:06 | INFO  | Task 220c2dfa-c5b9-47e8-8eba-185aec635564 is in state STARTED 2025-11-08 14:00:06.398181 | orchestrator | 2025-11-08 14:00:06 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:00:09.426327 | orchestrator | 2025-11-08 14:00:09 | INFO  | Task cd72c351-5ca4-4384-9f1c-cab6a4db6339 is in state STARTED 2025-11-08 14:00:09.426463 | orchestrator | 2025-11-08 14:00:09 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:00:09.426775 | orchestrator | 2025-11-08 14:00:09 | INFO  | Task 458dda79-f6f6-4d52-8f9b-366962991c3a is in state STARTED 2025-11-08 14:00:09.428098 | orchestrator | 2025-11-08 14:00:09 | INFO  | Task 220c2dfa-c5b9-47e8-8eba-185aec635564 is in state STARTED 2025-11-08 14:00:09.428117 | orchestrator | 2025-11-08 14:00:09 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:00:12.451978 | orchestrator | 2025-11-08 14:00:12 | INFO  | Task cd72c351-5ca4-4384-9f1c-cab6a4db6339 is in state SUCCESS 2025-11-08 14:00:12.452111 | orchestrator | 2025-11-08 14:00:12.452131 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2025-11-08 14:00:12.452144 | orchestrator | 2.16.14 2025-11-08 14:00:12.452157 | orchestrator | 2025-11-08 14:00:12.452168 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-11-08 14:00:12.452181 | orchestrator | 2025-11-08 14:00:12.452191 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-11-08 14:00:12.452202 | orchestrator | Saturday 08 November 2025 13:58:02 +0000 (0:00:00.263) 0:00:00.263 ***** 2025-11-08 14:00:12.452214 | orchestrator | changed: [testbed-manager] 2025-11-08 14:00:12.452227 | orchestrator | 2025-11-08 14:00:12.452238 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-11-08 14:00:12.452249 | orchestrator | Saturday 08 November 2025 13:58:04 +0000 (0:00:02.338) 0:00:02.601 ***** 2025-11-08 14:00:12.452260 | orchestrator | changed: [testbed-manager] 2025-11-08 14:00:12.452272 | orchestrator | 2025-11-08 14:00:12.452282 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-11-08 14:00:12.452293 | orchestrator | Saturday 08 November 2025 13:58:05 +0000 (0:00:01.026) 0:00:03.627 ***** 2025-11-08 14:00:12.452305 | orchestrator | changed: [testbed-manager] 2025-11-08 14:00:12.452315 | orchestrator | 
2025-11-08 14:00:12.452326 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-11-08 14:00:12.452338 | orchestrator | Saturday 08 November 2025 13:58:06 +0000 (0:00:00.881) 0:00:04.509 ***** 2025-11-08 14:00:12.452350 | orchestrator | changed: [testbed-manager] 2025-11-08 14:00:12.452361 | orchestrator | 2025-11-08 14:00:12.452372 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-11-08 14:00:12.452384 | orchestrator | Saturday 08 November 2025 13:58:07 +0000 (0:00:00.936) 0:00:05.445 ***** 2025-11-08 14:00:12.452395 | orchestrator | changed: [testbed-manager] 2025-11-08 14:00:12.452405 | orchestrator | 2025-11-08 14:00:12.452417 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-11-08 14:00:12.452429 | orchestrator | Saturday 08 November 2025 13:58:08 +0000 (0:00:01.010) 0:00:06.456 ***** 2025-11-08 14:00:12.452441 | orchestrator | changed: [testbed-manager] 2025-11-08 14:00:12.452453 | orchestrator | 2025-11-08 14:00:12.452464 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-11-08 14:00:12.452476 | orchestrator | Saturday 08 November 2025 13:58:09 +0000 (0:00:00.865) 0:00:07.321 ***** 2025-11-08 14:00:12.452487 | orchestrator | changed: [testbed-manager] 2025-11-08 14:00:12.452499 | orchestrator | 2025-11-08 14:00:12.452510 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-11-08 14:00:12.452522 | orchestrator | Saturday 08 November 2025 13:58:10 +0000 (0:00:01.158) 0:00:08.479 ***** 2025-11-08 14:00:12.452533 | orchestrator | changed: [testbed-manager] 2025-11-08 14:00:12.452545 | orchestrator | 2025-11-08 14:00:12.452556 | orchestrator | TASK [Create admin user] ******************************************************* 2025-11-08 14:00:12.452607 | orchestrator | Saturday 08 November 2025 13:58:11 +0000 (0:00:00.988) 0:00:09.468 ***** 2025-11-08 14:00:12.452639 | orchestrator | changed: [testbed-manager] 2025-11-08 14:00:12.452651 | orchestrator | 2025-11-08 14:00:12.452662 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-11-08 14:00:12.452674 | orchestrator | Saturday 08 November 2025 13:59:00 +0000 (0:00:49.193) 0:00:58.662 ***** 2025-11-08 14:00:12.452684 | orchestrator | skipping: [testbed-manager] 2025-11-08 14:00:12.452696 | orchestrator | 2025-11-08 14:00:12.452708 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-11-08 14:00:12.452720 | orchestrator | 2025-11-08 14:00:12.452731 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-11-08 14:00:12.452741 | orchestrator | Saturday 08 November 2025 13:59:00 +0000 (0:00:00.144) 0:00:58.806 ***** 2025-11-08 14:00:12.452749 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:00:12.452756 | orchestrator | 2025-11-08 14:00:12.452763 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-11-08 14:00:12.452769 | orchestrator | 2025-11-08 14:00:12.452776 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-11-08 14:00:12.452783 | orchestrator | Saturday 08 November 2025 13:59:12 +0000 (0:00:11.523) 0:01:10.330 ***** 2025-11-08 14:00:12.452789 | orchestrator | changed: [testbed-node-1] 2025-11-08 14:00:12.452796 | orchestrator | 
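For orientation, the ceph dashboard bootstrap play above boils down to a handful of ceph CLI calls run on testbed-manager. A minimal Ansible sketch of the same sequence follows; this is not the actual OSISM/testbed playbook, the option values are taken from the task names in the log, and the temporary password file path is hypothetical.

    # Sketch only: condensed equivalent of the dashboard bootstrap tasks above.
    - name: Bootstrap ceph dashboard (sketch)
      hosts: testbed-manager
      gather_facts: false
      tasks:
        - name: Disable the dashboard while reconfiguring it
          ansible.builtin.command: ceph mgr module disable dashboard
          changed_when: true

        - name: Set the mgr/dashboard options seen in the log
          ansible.builtin.command: "ceph config set mgr {{ item.option }} {{ item.value }}"
          loop:
            - { option: mgr/dashboard/ssl, value: "false" }
            - { option: mgr/dashboard/server_port, value: "7000" }
            - { option: mgr/dashboard/server_addr, value: "0.0.0.0" }
            - { option: mgr/dashboard/standby_behaviour, value: "error" }
            - { option: mgr/dashboard/standby_error_status_code, value: "404" }
          changed_when: true

        - name: Re-enable the dashboard module
          ansible.builtin.command: ceph mgr module enable dashboard
          changed_when: true

        - name: Create the dashboard admin user from a temporary password file (hypothetical path)
          ansible.builtin.command: ceph dashboard ac-user-create admin -i /tmp/ceph_dashboard_password administrator
          changed_when: true

The "Restart ceph manager services" plays around this point restart the mgr daemon one node at a time so that the new dashboard configuration is picked up.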
2025-11-08 14:00:12.452803 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-11-08 14:00:12.452836 | orchestrator |
2025-11-08 14:00:12.452847 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-11-08 14:00:12.452858 | orchestrator | Saturday 08 November 2025 13:59:13 +0000 (0:00:01.247) 0:01:11.577 *****
2025-11-08 14:00:12.452869 | orchestrator | changed: [testbed-node-2]
2025-11-08 14:00:12.452879 | orchestrator |
2025-11-08 14:00:12.452890 | orchestrator | PLAY RECAP *********************************************************************
2025-11-08 14:00:12.452902 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-11-08 14:00:12.452912 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-08 14:00:12.452923 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-08 14:00:12.452932 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-08 14:00:12.452942 | orchestrator |
2025-11-08 14:00:12.452951 | orchestrator |
2025-11-08 14:00:12.452960 | orchestrator |
2025-11-08 14:00:12.452971 | orchestrator | TASKS RECAP ********************************************************************
2025-11-08 14:00:12.452981 | orchestrator | Saturday 08 November 2025 13:59:24 +0000 (0:00:11.205) 0:01:22.783 *****
2025-11-08 14:00:12.453018 | orchestrator | ===============================================================================
2025-11-08 14:00:12.453030 | orchestrator | Create admin user ------------------------------------------------------ 49.19s
2025-11-08 14:00:12.453040 | orchestrator | Restart ceph manager service ------------------------------------------- 23.98s
2025-11-08 14:00:12.453049 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.34s
2025-11-08 14:00:12.453059 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.16s
2025-11-08 14:00:12.453068 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.03s
2025-11-08 14:00:12.453078 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.01s
2025-11-08 14:00:12.453087 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 0.99s
2025-11-08 14:00:12.453098 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 0.94s
2025-11-08 14:00:12.453107 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.88s
2025-11-08 14:00:12.453133 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.87s
2025-11-08 14:00:12.453144 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.14s
2025-11-08 14:00:12.453154 | orchestrator |
2025-11-08 14:00:12.453858 | orchestrator |
2025-11-08 14:00:12.453914 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-11-08 14:00:12.453924 | orchestrator |
2025-11-08 14:00:12.453931 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-11-08 14:00:12.453938 | orchestrator | Saturday 08 November 2025 13:58:08 +0000 (0:00:00.317) 0:00:00.317 *****
2025-11-08 14:00:12.453946 | orchestrator | ok: [testbed-node-0]
2025-11-08 14:00:12.453955 | orchestrator | ok: [testbed-node-1]
2025-11-08 14:00:12.453962 | orchestrator | ok: [testbed-node-2]
2025-11-08 14:00:12.453969 | orchestrator |
2025-11-08 14:00:12.453976 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-11-08 14:00:12.453983 | orchestrator | Saturday 08 November 2025 13:58:08 +0000 (0:00:00.315) 0:00:00.632 *****
2025-11-08 14:00:12.453989 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2025-11-08 14:00:12.453997 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2025-11-08 14:00:12.454004 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2025-11-08 14:00:12.454010 | orchestrator |
2025-11-08 14:00:12.454065 | orchestrator | PLAY [Apply role barbican] *****************************************************
2025-11-08 14:00:12.454078 | orchestrator |
2025-11-08 14:00:12.454084 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-11-08 14:00:12.454090 | orchestrator | Saturday 08 November 2025 13:58:09 +0000 (0:00:00.659) 0:00:01.292 *****
2025-11-08 14:00:12.454096 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-08 14:00:12.454105 | orchestrator |
2025-11-08 14:00:12.454112 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2025-11-08 14:00:12.454119 | orchestrator | Saturday 08 November 2025 13:58:10 +0000 (0:00:00.604) 0:00:01.897 *****
2025-11-08 14:00:12.454126 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2025-11-08 14:00:12.454132 | orchestrator |
2025-11-08 14:00:12.454138 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2025-11-08 14:00:12.454145 | orchestrator | Saturday 08 November 2025 13:58:14 +0000 (0:00:04.340) 0:00:06.238 *****
2025-11-08 14:00:12.454151 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2025-11-08 14:00:12.454158 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2025-11-08 14:00:12.454164 | orchestrator |
2025-11-08 14:00:12.454170 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2025-11-08 14:00:12.454177 | orchestrator | Saturday 08 November 2025 13:58:21 +0000 (0:00:06.581) 0:00:12.819 *****
2025-11-08 14:00:12.454184 | orchestrator | changed: [testbed-node-0] => (item=service)
2025-11-08 14:00:12.454187 | orchestrator |
2025-11-08 14:00:12.454192 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2025-11-08 14:00:12.454195 | orchestrator | Saturday 08 November 2025 13:58:24 +0000 (0:00:03.780) 0:00:16.600 *****
2025-11-08 14:00:12.454200 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-11-08 14:00:12.454204 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2025-11-08 14:00:12.454208 | orchestrator |
2025-11-08 14:00:12.454212 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2025-11-08 14:00:12.454216 | orchestrator | Saturday 08 November 2025 13:58:29 +0000 (0:00:04.610) 0:00:21.210 *****
2025-11-08 14:00:12.454220 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-11-08
14:00:12.454224 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-11-08 14:00:12.454228 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-11-08 14:00:12.454258 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-11-08 14:00:12.454262 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-11-08 14:00:12.454273 | orchestrator | 2025-11-08 14:00:12.454277 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-11-08 14:00:12.454281 | orchestrator | Saturday 08 November 2025 13:58:45 +0000 (0:00:15.875) 0:00:37.088 ***** 2025-11-08 14:00:12.454284 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-11-08 14:00:12.454288 | orchestrator | 2025-11-08 14:00:12.454292 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-11-08 14:00:12.454296 | orchestrator | Saturday 08 November 2025 13:58:49 +0000 (0:00:04.400) 0:00:41.489 ***** 2025-11-08 14:00:12.454390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-08 14:00:12.454417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-08 14:00:12.454427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-08 14:00:12.454433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-08 14:00:12.454442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-08 14:00:12.454448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:00:12.454457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-08 14:00:12.454462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:00:12.454472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:00:12.454480 | orchestrator | 2025-11-08 14:00:12.454487 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-11-08 14:00:12.454493 | orchestrator | Saturday 08 November 2025 13:58:52 +0000 (0:00:03.075) 0:00:44.564 ***** 2025-11-08 14:00:12.454501 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-11-08 14:00:12.454507 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-11-08 14:00:12.454514 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-11-08 14:00:12.454520 | orchestrator | 2025-11-08 14:00:12.454527 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-11-08 14:00:12.454538 | orchestrator | Saturday 08 November 2025 13:58:54 +0000 (0:00:01.341) 0:00:45.906 ***** 2025-11-08 14:00:12.454545 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:00:12.454551 | orchestrator | 2025-11-08 14:00:12.454558 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-11-08 14:00:12.454566 | orchestrator | Saturday 08 November 2025 13:58:54 +0000 (0:00:00.282) 0:00:46.188 ***** 2025-11-08 14:00:12.454572 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:00:12.454580 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:00:12.454586 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:00:12.454591 | orchestrator | 2025-11-08 14:00:12.454597 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-11-08 14:00:12.454605 | orchestrator | Saturday 08 November 2025 13:58:54 +0000 (0:00:00.478) 0:00:46.667 ***** 2025-11-08 14:00:12.454611 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 14:00:12.454618 | orchestrator | 2025-11-08 14:00:12.454624 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-11-08 14:00:12.454630 | orchestrator | Saturday 08 November 2025 13:58:55 +0000 (0:00:00.492) 0:00:47.160 ***** 2025-11-08 14:00:12.454638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-08 14:00:12.454652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-08 14:00:12.454665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-08 14:00:12.454679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-08 14:00:12.454691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-08 14:00:12.454697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-08 14:00:12.454704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:00:12.454719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:00:12.454728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:00:12.454736 | orchestrator | 2025-11-08 14:00:12.454745 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-11-08 14:00:12.454758 | orchestrator | Saturday 08 November 2025 13:58:59 +0000 (0:00:03.912) 0:00:51.073 ***** 2025-11-08 14:00:12.454771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-11-08 14:00:12.454779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-08 14:00:12.454787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-08 14:00:12.454795 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:00:12.454807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-11-08 14:00:12.454837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-08 14:00:12.454850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-08 14:00:12.454862 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:00:12.454869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-11-08 14:00:12.454876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-08 14:00:12.454882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-08 14:00:12.454889 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:00:12.454896 | orchestrator | 2025-11-08 14:00:12.454903 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-11-08 14:00:12.454909 | orchestrator | Saturday 08 November 2025 13:59:01 +0000 (0:00:02.601) 0:00:53.674 ***** 2025-11-08 14:00:12.454920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-11-08 14:00:12.454936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-08 14:00:12.454943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-08 14:00:12.454949 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:00:12.454956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-11-08 14:00:12.454963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-08 
14:00:12.454970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-08 14:00:12.454977 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:00:12.454989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-11-08 14:00:12.455009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-08 14:00:12.455017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-08 14:00:12.455023 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:00:12.455030 | orchestrator | 2025-11-08 14:00:12.455036 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-11-08 14:00:12.455043 | orchestrator | Saturday 08 November 2025 13:59:02 +0000 (0:00:00.937) 0:00:54.611 ***** 2025-11-08 14:00:12.455049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 
'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-08 14:00:12.455060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-08 14:00:12.455072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-08 14:00:12.455084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-08 14:00:12.455092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 
'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-08 14:00:12.455099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-08 14:00:12.455106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:00:12.455117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:00:12.455130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:00:12.455137 | orchestrator | 2025-11-08 14:00:12.455143 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-11-08 14:00:12.455149 | orchestrator | Saturday 08 November 2025 13:59:06 +0000 (0:00:03.383) 0:00:57.995 ***** 2025-11-08 14:00:12.455155 | orchestrator | changed: [testbed-node-1] 2025-11-08 14:00:12.455162 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:00:12.455169 | orchestrator | changed: [testbed-node-2] 2025-11-08 14:00:12.455175 | orchestrator | 2025-11-08 14:00:12.455182 | 
orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-11-08 14:00:12.455188 | orchestrator | Saturday 08 November 2025 13:59:09 +0000 (0:00:02.912) 0:01:00.908 ***** 2025-11-08 14:00:12.455194 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-08 14:00:12.455200 | orchestrator | 2025-11-08 14:00:12.455206 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-11-08 14:00:12.455215 | orchestrator | Saturday 08 November 2025 13:59:10 +0000 (0:00:01.266) 0:01:02.174 ***** 2025-11-08 14:00:12.455221 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:00:12.455227 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:00:12.455234 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:00:12.455240 | orchestrator | 2025-11-08 14:00:12.455246 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-11-08 14:00:12.455252 | orchestrator | Saturday 08 November 2025 13:59:11 +0000 (0:00:01.175) 0:01:03.350 ***** 2025-11-08 14:00:12.455259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-08 14:00:12.455266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-08 14:00:12.455287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-08 14:00:12.455294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-08 14:00:12.455305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-08 14:00:12.455313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-08 14:00:12.455320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:00:12.455326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:00:12.455338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:00:12.455344 | orchestrator | 2025-11-08 14:00:12.455351 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-11-08 14:00:12.455400 | orchestrator | Saturday 08 November 2025 13:59:21 +0000 (0:00:09.408) 0:01:12.758 ***** 2025-11-08 14:00:12.455409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-11-08 14:00:12.455419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-08 14:00:12.455426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-08 14:00:12.455433 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:00:12.455439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-11-08 14:00:12.455451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-08 14:00:12.455464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-08 14:00:12.455471 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:00:12.455482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-11-08 14:00:12.455489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-08 14:00:12.455495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-08 14:00:12.455502 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:00:12.455508 | orchestrator | 2025-11-08 14:00:12.455515 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-11-08 14:00:12.455522 | orchestrator | Saturday 08 November 2025 13:59:21 +0000 (0:00:00.760) 0:01:13.519 ***** 2025-11-08 14:00:12.455534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-08 14:00:12.455548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-08 14:00:12.455559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-08 14:00:12.455566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-08 14:00:12.455572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-08 14:00:12.455582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-08 14:00:12.455589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:00:12.455602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:00:12.455609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:00:12.455616 | orchestrator | 2025-11-08 14:00:12.455622 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-11-08 14:00:12.455629 | orchestrator | Saturday 08 November 2025 13:59:25 +0000 (0:00:03.534) 0:01:17.053 ***** 2025-11-08 14:00:12.455635 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:00:12.455642 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:00:12.455652 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:00:12.455659 | orchestrator | 2025-11-08 14:00:12.455665 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-11-08 14:00:12.455671 | orchestrator | Saturday 08 November 2025 13:59:26 +0000 (0:00:00.747) 0:01:17.801 ***** 2025-11-08 14:00:12.455676 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:00:12.455682 | orchestrator | 2025-11-08 14:00:12.455688 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-11-08 14:00:12.455694 | orchestrator | Saturday 08 November 2025 13:59:28 +0000 (0:00:02.025) 0:01:19.826 ***** 2025-11-08 14:00:12.455700 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:00:12.455707 | orchestrator | 2025-11-08 14:00:12.455713 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-11-08 14:00:12.455720 | orchestrator | Saturday 08 November 2025 13:59:30 +0000 (0:00:02.281) 0:01:22.108 ***** 2025-11-08 14:00:12.455727 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:00:12.455742 | orchestrator | 2025-11-08 14:00:12.455749 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-11-08 14:00:12.455756 | orchestrator | Saturday 08 November 2025 13:59:42 +0000 (0:00:12.371) 0:01:34.480 ***** 2025-11-08 14:00:12.455762 | orchestrator | 2025-11-08 14:00:12.455768 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-11-08 14:00:12.455775 | orchestrator | Saturday 08 November 2025 13:59:42 +0000 (0:00:00.080) 0:01:34.560 ***** 2025-11-08 14:00:12.455781 | orchestrator | 2025-11-08 14:00:12.455788 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-11-08 14:00:12.455795 | orchestrator | Saturday 08 November 2025 13:59:42 +0000 (0:00:00.058) 0:01:34.619 ***** 2025-11-08 14:00:12.455802 | orchestrator | 2025-11-08 14:00:12.455848 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] 
******************** 2025-11-08 14:00:12.455859 | orchestrator | Saturday 08 November 2025 13:59:42 +0000 (0:00:00.061) 0:01:34.681 ***** 2025-11-08 14:00:12.455866 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:00:12.455872 | orchestrator | changed: [testbed-node-1] 2025-11-08 14:00:12.455879 | orchestrator | changed: [testbed-node-2] 2025-11-08 14:00:12.455885 | orchestrator | 2025-11-08 14:00:12.455891 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-11-08 14:00:12.455897 | orchestrator | Saturday 08 November 2025 13:59:52 +0000 (0:00:09.447) 0:01:44.128 ***** 2025-11-08 14:00:12.455904 | orchestrator | changed: [testbed-node-2] 2025-11-08 14:00:12.455910 | orchestrator | changed: [testbed-node-1] 2025-11-08 14:00:12.455917 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:00:12.455923 | orchestrator | 2025-11-08 14:00:12.455930 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-11-08 14:00:12.455936 | orchestrator | Saturday 08 November 2025 14:00:01 +0000 (0:00:09.223) 0:01:53.352 ***** 2025-11-08 14:00:12.455943 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:00:12.455950 | orchestrator | changed: [testbed-node-2] 2025-11-08 14:00:12.455956 | orchestrator | changed: [testbed-node-1] 2025-11-08 14:00:12.455962 | orchestrator | 2025-11-08 14:00:12.455968 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 14:00:12.455976 | orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-11-08 14:00:12.455984 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-08 14:00:12.455990 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-08 14:00:12.455997 | orchestrator | 2025-11-08 14:00:12.456003 | orchestrator | 2025-11-08 14:00:12.456009 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 14:00:12.456015 | orchestrator | Saturday 08 November 2025 14:00:09 +0000 (0:00:08.302) 0:02:01.654 ***** 2025-11-08 14:00:12.456021 | orchestrator | =============================================================================== 2025-11-08 14:00:12.456028 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.88s 2025-11-08 14:00:12.456040 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.37s 2025-11-08 14:00:12.456047 | orchestrator | barbican : Restart barbican-api container ------------------------------- 9.45s 2025-11-08 14:00:12.456053 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 9.41s 2025-11-08 14:00:12.456059 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 9.22s 2025-11-08 14:00:12.456066 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 8.30s 2025-11-08 14:00:12.456072 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.58s 2025-11-08 14:00:12.456078 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.61s 2025-11-08 14:00:12.456090 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.40s 2025-11-08 14:00:12.456097 | orchestrator | service-ks-register : 
barbican | Creating services ---------------------- 4.34s 2025-11-08 14:00:12.456104 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.91s 2025-11-08 14:00:12.456110 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.78s 2025-11-08 14:00:12.456116 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.53s 2025-11-08 14:00:12.456122 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.38s 2025-11-08 14:00:12.456128 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 3.08s 2025-11-08 14:00:12.456135 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.91s 2025-11-08 14:00:12.456146 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 2.60s 2025-11-08 14:00:12.456153 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.28s 2025-11-08 14:00:12.456159 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.03s 2025-11-08 14:00:12.456166 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.34s 2025-11-08 14:00:12.456365 | orchestrator | 2025-11-08 14:00:12 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:00:12.456394 | orchestrator | 2025-11-08 14:00:12 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:00:12.457109 | orchestrator | 2025-11-08 14:00:12 | INFO  | Task 458dda79-f6f6-4d52-8f9b-366962991c3a is in state STARTED 2025-11-08 14:00:12.458866 | orchestrator | 2025-11-08 14:00:12 | INFO  | Task 220c2dfa-c5b9-47e8-8eba-185aec635564 is in state STARTED 2025-11-08 14:00:12.458905 | orchestrator | 2025-11-08 14:00:12 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:00:15.508654 | orchestrator | 2025-11-08 14:00:15 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:00:15.509569 | orchestrator | 2025-11-08 14:00:15 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:00:15.513335 | orchestrator | 2025-11-08 14:00:15 | INFO  | Task 458dda79-f6f6-4d52-8f9b-366962991c3a is in state STARTED 2025-11-08 14:00:15.514475 | orchestrator | 2025-11-08 14:00:15 | INFO  | Task 220c2dfa-c5b9-47e8-8eba-185aec635564 is in state STARTED 2025-11-08 14:00:15.514519 | orchestrator | 2025-11-08 14:00:15 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:00:18.536486 | orchestrator | 2025-11-08 14:00:18 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:00:18.537264 | orchestrator | 2025-11-08 14:00:18 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:00:18.537411 | orchestrator | 2025-11-08 14:00:18 | INFO  | Task 458dda79-f6f6-4d52-8f9b-366962991c3a is in state STARTED 2025-11-08 14:00:18.537632 | orchestrator | 2025-11-08 14:00:18 | INFO  | Task 220c2dfa-c5b9-47e8-8eba-185aec635564 is in state STARTED 2025-11-08 14:00:18.537754 | orchestrator | 2025-11-08 14:00:18 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:00:21.559839 | orchestrator | 2025-11-08 14:00:21 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:00:21.559983 | orchestrator | 2025-11-08 14:00:21 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 
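Editor's note: the repeated "is in state STARTED" / "Wait 1 second(s) until the next check" entries that follow come from the deployment watcher polling the OSISM task queue until every queued play reports SUCCESS. As an illustration only (this is not the actual osism client code; the get_state callable and wait_for_tasks helper below are hypothetical), a polling loop of this shape could look like:

import time

def wait_for_tasks(get_state, task_ids, interval=1):
    """Poll each task until none is left in a non-terminal state.

    get_state is a caller-supplied callable (hypothetical here) that maps a
    task id to its current state string, e.g. "STARTED" or "SUCCESS".
    """
    pending = set(task_ids)
    while pending:
        # sorted() copies the set, so discarding inside the loop is safe
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)

# Dummy usage: every task reports SUCCESS on the first check.
wait_for_tasks(lambda task_id: "SUCCESS",
               ["b35a8372-9890-4959-867b-af480b7641d7",
                "220c2dfa-c5b9-47e8-8eba-185aec635564"])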
2025-11-08 14:00:21.560557 | orchestrator | 2025-11-08 14:00:21 | INFO  | Task 458dda79-f6f6-4d52-8f9b-366962991c3a is in state STARTED 2025-11-08 14:00:21.561115 | orchestrator | 2025-11-08 14:00:21 | INFO  | Task 220c2dfa-c5b9-47e8-8eba-185aec635564 is in state STARTED 2025-11-08 14:00:21.561214 | orchestrator | 2025-11-08 14:00:21 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:00:24.584623 | orchestrator | 2025-11-08 14:00:24 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:00:24.584708 | orchestrator | 2025-11-08 14:00:24 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:00:24.585109 | orchestrator | 2025-11-08 14:00:24 | INFO  | Task 458dda79-f6f6-4d52-8f9b-366962991c3a is in state STARTED 2025-11-08 14:00:24.587105 | orchestrator | 2025-11-08 14:00:24 | INFO  | Task 220c2dfa-c5b9-47e8-8eba-185aec635564 is in state STARTED 2025-11-08 14:00:24.587170 | orchestrator | 2025-11-08 14:00:24 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:00:27.620297 | orchestrator | 2025-11-08 14:00:27 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:00:27.620420 | orchestrator | 2025-11-08 14:00:27 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:00:27.620931 | orchestrator | 2025-11-08 14:00:27 | INFO  | Task 458dda79-f6f6-4d52-8f9b-366962991c3a is in state STARTED 2025-11-08 14:00:27.622910 | orchestrator | 2025-11-08 14:00:27 | INFO  | Task 220c2dfa-c5b9-47e8-8eba-185aec635564 is in state STARTED 2025-11-08 14:00:27.623002 | orchestrator | 2025-11-08 14:00:27 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:00:30.653561 | orchestrator | 2025-11-08 14:00:30 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:00:30.654983 | orchestrator | 2025-11-08 14:00:30 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:00:30.657461 | orchestrator | 2025-11-08 14:00:30 | INFO  | Task 458dda79-f6f6-4d52-8f9b-366962991c3a is in state STARTED 2025-11-08 14:00:30.659714 | orchestrator | 2025-11-08 14:00:30 | INFO  | Task 220c2dfa-c5b9-47e8-8eba-185aec635564 is in state STARTED 2025-11-08 14:00:30.659748 | orchestrator | 2025-11-08 14:00:30 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:00:33.690948 | orchestrator | 2025-11-08 14:00:33 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:00:33.691389 | orchestrator | 2025-11-08 14:00:33 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:00:33.692091 | orchestrator | 2025-11-08 14:00:33 | INFO  | Task 458dda79-f6f6-4d52-8f9b-366962991c3a is in state STARTED 2025-11-08 14:00:33.693334 | orchestrator | 2025-11-08 14:00:33 | INFO  | Task 220c2dfa-c5b9-47e8-8eba-185aec635564 is in state STARTED 2025-11-08 14:00:33.693370 | orchestrator | 2025-11-08 14:00:33 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:00:36.713461 | orchestrator | 2025-11-08 14:00:36 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:00:36.713752 | orchestrator | 2025-11-08 14:00:36 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:00:36.714230 | orchestrator | 2025-11-08 14:00:36 | INFO  | Task 458dda79-f6f6-4d52-8f9b-366962991c3a is in state STARTED 2025-11-08 14:00:36.715025 | orchestrator | 2025-11-08 14:00:36 | INFO  | Task 220c2dfa-c5b9-47e8-8eba-185aec635564 is in state STARTED 
2025-11-08 14:00:36.715069 | orchestrator | 2025-11-08 14:00:36 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:00:39.741612 | orchestrator | 2025-11-08 14:00:39 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:00:39.742103 | orchestrator | 2025-11-08 14:00:39 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:00:39.743902 | orchestrator | 2025-11-08 14:00:39 | INFO  | Task 458dda79-f6f6-4d52-8f9b-366962991c3a is in state STARTED 2025-11-08 14:00:39.744360 | orchestrator | 2025-11-08 14:00:39 | INFO  | Task 220c2dfa-c5b9-47e8-8eba-185aec635564 is in state STARTED 2025-11-08 14:00:39.745142 | orchestrator | 2025-11-08 14:00:39 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:00:42.773194 | orchestrator | 2025-11-08 14:00:42 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:00:42.773345 | orchestrator | 2025-11-08 14:00:42 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:00:42.774895 | orchestrator | 2025-11-08 14:00:42 | INFO  | Task 458dda79-f6f6-4d52-8f9b-366962991c3a is in state STARTED 2025-11-08 14:00:42.775488 | orchestrator | 2025-11-08 14:00:42 | INFO  | Task 220c2dfa-c5b9-47e8-8eba-185aec635564 is in state STARTED 2025-11-08 14:00:42.775668 | orchestrator | 2025-11-08 14:00:42 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:00:45.808959 | orchestrator | 2025-11-08 14:00:45 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:00:45.809612 | orchestrator | 2025-11-08 14:00:45 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:00:45.810609 | orchestrator | 2025-11-08 14:00:45 | INFO  | Task 458dda79-f6f6-4d52-8f9b-366962991c3a is in state STARTED 2025-11-08 14:00:45.811666 | orchestrator | 2025-11-08 14:00:45 | INFO  | Task 220c2dfa-c5b9-47e8-8eba-185aec635564 is in state STARTED 2025-11-08 14:00:45.812097 | orchestrator | 2025-11-08 14:00:45 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:00:48.866884 | orchestrator | 2025-11-08 14:00:48 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:00:48.868990 | orchestrator | 2025-11-08 14:00:48 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:00:48.870459 | orchestrator | 2025-11-08 14:00:48 | INFO  | Task 458dda79-f6f6-4d52-8f9b-366962991c3a is in state STARTED 2025-11-08 14:00:48.873499 | orchestrator | 2025-11-08 14:00:48 | INFO  | Task 220c2dfa-c5b9-47e8-8eba-185aec635564 is in state STARTED 2025-11-08 14:00:48.873574 | orchestrator | 2025-11-08 14:00:48 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:00:51.906566 | orchestrator | 2025-11-08 14:00:51 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:00:51.906696 | orchestrator | 2025-11-08 14:00:51 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:00:51.907878 | orchestrator | 2025-11-08 14:00:51 | INFO  | Task 458dda79-f6f6-4d52-8f9b-366962991c3a is in state STARTED 2025-11-08 14:00:51.909124 | orchestrator | 2025-11-08 14:00:51 | INFO  | Task 220c2dfa-c5b9-47e8-8eba-185aec635564 is in state STARTED 2025-11-08 14:00:51.909168 | orchestrator | 2025-11-08 14:00:51 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:00:54.954585 | orchestrator | 2025-11-08 14:00:54 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:00:54.955625 
| orchestrator | 2025-11-08 14:00:54 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:00:54.957069 | orchestrator | 2025-11-08 14:00:54 | INFO  | Task 458dda79-f6f6-4d52-8f9b-366962991c3a is in state STARTED 2025-11-08 14:00:54.958419 | orchestrator | 2025-11-08 14:00:54 | INFO  | Task 220c2dfa-c5b9-47e8-8eba-185aec635564 is in state STARTED 2025-11-08 14:00:54.958453 | orchestrator | 2025-11-08 14:00:54 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:00:57.997790 | orchestrator | 2025-11-08 14:00:57 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:00:58.001180 | orchestrator | 2025-11-08 14:00:58 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:00:58.002399 | orchestrator | 2025-11-08 14:00:58 | INFO  | Task 458dda79-f6f6-4d52-8f9b-366962991c3a is in state STARTED 2025-11-08 14:00:58.004664 | orchestrator | 2025-11-08 14:00:58 | INFO  | Task 220c2dfa-c5b9-47e8-8eba-185aec635564 is in state STARTED 2025-11-08 14:00:58.004879 | orchestrator | 2025-11-08 14:00:58 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:01:01.055159 | orchestrator | 2025-11-08 14:01:01 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:01:01.055250 | orchestrator | 2025-11-08 14:01:01 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:01:01.055268 | orchestrator | 2025-11-08 14:01:01 | INFO  | Task 458dda79-f6f6-4d52-8f9b-366962991c3a is in state STARTED 2025-11-08 14:01:01.055282 | orchestrator | 2025-11-08 14:01:01 | INFO  | Task 220c2dfa-c5b9-47e8-8eba-185aec635564 is in state STARTED 2025-11-08 14:01:01.055296 | orchestrator | 2025-11-08 14:01:01 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:01:04.089462 | orchestrator | 2025-11-08 14:01:04 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:01:04.090891 | orchestrator | 2025-11-08 14:01:04 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:01:04.092146 | orchestrator | 2025-11-08 14:01:04 | INFO  | Task 458dda79-f6f6-4d52-8f9b-366962991c3a is in state STARTED 2025-11-08 14:01:04.094207 | orchestrator | 2025-11-08 14:01:04 | INFO  | Task 220c2dfa-c5b9-47e8-8eba-185aec635564 is in state STARTED 2025-11-08 14:01:04.094382 | orchestrator | 2025-11-08 14:01:04 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:01:07.141116 | orchestrator | 2025-11-08 14:01:07 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:01:07.143690 | orchestrator | 2025-11-08 14:01:07 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:01:07.145185 | orchestrator | 2025-11-08 14:01:07 | INFO  | Task 458dda79-f6f6-4d52-8f9b-366962991c3a is in state STARTED 2025-11-08 14:01:07.146981 | orchestrator | 2025-11-08 14:01:07 | INFO  | Task 220c2dfa-c5b9-47e8-8eba-185aec635564 is in state STARTED 2025-11-08 14:01:07.147020 | orchestrator | 2025-11-08 14:01:07 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:01:10.191840 | orchestrator | 2025-11-08 14:01:10 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:01:10.193977 | orchestrator | 2025-11-08 14:01:10 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:01:10.195257 | orchestrator | 2025-11-08 14:01:10 | INFO  | Task 458dda79-f6f6-4d52-8f9b-366962991c3a is in state STARTED 2025-11-08 14:01:10.196984 | 
orchestrator | 2025-11-08 14:01:10 | INFO  | Task 220c2dfa-c5b9-47e8-8eba-185aec635564 is in state STARTED 2025-11-08 14:01:10.197171 | orchestrator | 2025-11-08 14:01:10 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:01:13.234480 | orchestrator | 2025-11-08 14:01:13 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:01:13.236009 | orchestrator | 2025-11-08 14:01:13 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:01:13.237354 | orchestrator | 2025-11-08 14:01:13 | INFO  | Task 458dda79-f6f6-4d52-8f9b-366962991c3a is in state STARTED 2025-11-08 14:01:13.239070 | orchestrator | 2025-11-08 14:01:13 | INFO  | Task 220c2dfa-c5b9-47e8-8eba-185aec635564 is in state STARTED 2025-11-08 14:01:13.239167 | orchestrator | 2025-11-08 14:01:13 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:01:16.278437 | orchestrator | 2025-11-08 14:01:16 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:01:16.279229 | orchestrator | 2025-11-08 14:01:16 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:01:16.280270 | orchestrator | 2025-11-08 14:01:16 | INFO  | Task 458dda79-f6f6-4d52-8f9b-366962991c3a is in state STARTED 2025-11-08 14:01:16.282549 | orchestrator | 2025-11-08 14:01:16 | INFO  | Task 220c2dfa-c5b9-47e8-8eba-185aec635564 is in state STARTED 2025-11-08 14:01:16.282588 | orchestrator | 2025-11-08 14:01:16 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:01:19.357428 | orchestrator | 2025-11-08 14:01:19 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:01:19.357720 | orchestrator | 2025-11-08 14:01:19 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:01:19.358467 | orchestrator | 2025-11-08 14:01:19 | INFO  | Task 458dda79-f6f6-4d52-8f9b-366962991c3a is in state STARTED 2025-11-08 14:01:19.360647 | orchestrator | 2025-11-08 14:01:19 | INFO  | Task 220c2dfa-c5b9-47e8-8eba-185aec635564 is in state SUCCESS 2025-11-08 14:01:19.363952 | orchestrator | 2025-11-08 14:01:19.364028 | orchestrator | 2025-11-08 14:01:19.364044 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-08 14:01:19.364057 | orchestrator | 2025-11-08 14:01:19.364069 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-08 14:01:19.364268 | orchestrator | Saturday 08 November 2025 13:58:10 +0000 (0:00:00.294) 0:00:00.294 ***** 2025-11-08 14:01:19.364300 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:01:19.364489 | orchestrator | ok: [testbed-node-1] 2025-11-08 14:01:19.364502 | orchestrator | ok: [testbed-node-2] 2025-11-08 14:01:19.364513 | orchestrator | 2025-11-08 14:01:19.364524 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-08 14:01:19.364536 | orchestrator | Saturday 08 November 2025 13:58:11 +0000 (0:00:00.530) 0:00:00.825 ***** 2025-11-08 14:01:19.364548 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-11-08 14:01:19.364559 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-11-08 14:01:19.364570 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-11-08 14:01:19.364581 | orchestrator | 2025-11-08 14:01:19.364592 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-11-08 14:01:19.364631 | 
orchestrator | 2025-11-08 14:01:19.364644 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-11-08 14:01:19.364668 | orchestrator | Saturday 08 November 2025 13:58:12 +0000 (0:00:00.942) 0:00:01.768 ***** 2025-11-08 14:01:19.364771 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 14:01:19.364833 | orchestrator | 2025-11-08 14:01:19.364858 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-11-08 14:01:19.364911 | orchestrator | Saturday 08 November 2025 13:58:12 +0000 (0:00:00.639) 0:00:02.408 ***** 2025-11-08 14:01:19.364933 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-11-08 14:01:19.364945 | orchestrator | 2025-11-08 14:01:19.364979 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-11-08 14:01:19.364992 | orchestrator | Saturday 08 November 2025 13:58:16 +0000 (0:00:04.153) 0:00:06.561 ***** 2025-11-08 14:01:19.365003 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-11-08 14:01:19.365164 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-11-08 14:01:19.365226 | orchestrator | 2025-11-08 14:01:19.365266 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-11-08 14:01:19.365278 | orchestrator | Saturday 08 November 2025 13:58:23 +0000 (0:00:06.605) 0:00:13.167 ***** 2025-11-08 14:01:19.365334 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-11-08 14:01:19.365347 | orchestrator | 2025-11-08 14:01:19.365358 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-11-08 14:01:19.365368 | orchestrator | Saturday 08 November 2025 13:58:26 +0000 (0:00:03.321) 0:00:16.488 ***** 2025-11-08 14:01:19.365380 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-11-08 14:01:19.365390 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-11-08 14:01:19.365401 | orchestrator | 2025-11-08 14:01:19.365412 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-11-08 14:01:19.365423 | orchestrator | Saturday 08 November 2025 13:58:30 +0000 (0:00:03.904) 0:00:20.393 ***** 2025-11-08 14:01:19.365433 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-11-08 14:01:19.365444 | orchestrator | 2025-11-08 14:01:19.365455 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-11-08 14:01:19.365465 | orchestrator | Saturday 08 November 2025 13:58:34 +0000 (0:00:03.654) 0:00:24.047 ***** 2025-11-08 14:01:19.365476 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-11-08 14:01:19.365487 | orchestrator | 2025-11-08 14:01:19.365498 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-11-08 14:01:19.365509 | orchestrator | Saturday 08 November 2025 13:58:38 +0000 (0:00:03.619) 0:00:27.667 ***** 2025-11-08 14:01:19.365538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-08 14:01:19.365579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-08 14:01:19.365592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-08 14:01:19.365613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-08 14:01:19.365627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-08 14:01:19.365645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-08 14:01:19.365657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.365679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.365691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.365703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.365724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.365757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.365775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.365787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.365808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.365820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.365837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.365848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.365860 | orchestrator | 2025-11-08 14:01:19.365871 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-11-08 14:01:19.365882 | orchestrator | Saturday 08 November 2025 13:58:41 +0000 (0:00:03.089) 0:00:30.757 ***** 2025-11-08 14:01:19.365893 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:01:19.365904 | orchestrator | 2025-11-08 14:01:19.365915 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-11-08 14:01:19.365925 | orchestrator | Saturday 08 November 2025 13:58:41 +0000 (0:00:00.106) 0:00:30.863 ***** 2025-11-08 14:01:19.365936 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:01:19.365947 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:01:19.365958 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:01:19.365969 | orchestrator | 2025-11-08 14:01:19.365979 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-11-08 14:01:19.365990 | orchestrator | Saturday 08 November 2025 13:58:41 +0000 (0:00:00.249) 0:00:31.113 ***** 2025-11-08 14:01:19.366001 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 14:01:19.366012 | orchestrator | 2025-11-08 14:01:19.366094 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-11-08 14:01:19.366106 | orchestrator | Saturday 08 November 2025 13:58:42 +0000 (0:00:00.627) 0:00:31.741 ***** 2025-11-08 14:01:19.366118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-08 14:01:19.366139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-08 14:01:19.366166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-08 14:01:19.366178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-08 14:01:19.366190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-08 14:01:19.366206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-08 14:01:19.366218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.366236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.366254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.366266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.366277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.366288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.366306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.366324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.366343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.366354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.366365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.366377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.366388 | orchestrator | 2025-11-08 14:01:19.366415 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-11-08 14:01:19.366426 | orchestrator | Saturday 08 November 2025 13:58:48 +0000 (0:00:06.104) 0:00:37.845 ***** 2025-11-08 14:01:19.366443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-08 14:01:19.366455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-08 14:01:19.366480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.366492 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.366504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.366515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.366526 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:01:19.366543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-08 14:01:19.367048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-08 14:01:19.367151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-08 14:01:19.367168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-08 14:01:19.367181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.367194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.367219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.367232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.367272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.367285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.367297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.367309 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:01:19.367322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.367333 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:01:19.367345 | orchestrator | 2025-11-08 14:01:19.367357 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-11-08 14:01:19.367369 | orchestrator | Saturday 08 November 2025 13:58:49 +0000 (0:00:01.080) 0:00:38.926 ***** 2025-11-08 14:01:19.367386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-08 14:01:19.367405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-08 14:01:19.367427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.367439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.367450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.367461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.367473 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:01:19.367485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-08 14:01:19.367508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-08 14:01:19.367526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.367539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.367551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-08 14:01:19.367562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-08 14:01:19.367573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.367597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.367611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.367633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.367646 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:01:19.367660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.367672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.367685 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:01:19.367698 | orchestrator | 2025-11-08 14:01:19.367711 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-11-08 14:01:19.367723 | orchestrator | Saturday 08 November 2025 13:58:52 +0000 (0:00:02.760) 0:00:41.686 ***** 2025-11-08 14:01:19.367763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-08 14:01:19.367792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-08 14:01:19.367813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-08 14:01:19.367826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-08 14:01:19.367840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-08 14:01:19.367853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-08 14:01:19.367879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.367898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.367917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.367932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.367945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.367958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.367971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.367989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.368006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.368023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.368035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.368047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 
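A minimal sketch (not part of the captured console output) of a single entry of the per-service map that the designate tasks above loop over, reconstructed from the logged loop items. The variable name designate_services and the YAML layout are assumptions for illustration only; the values are copied from the log, and the two empty placeholder volume strings are omitted:

    designate_services:
      designate-central:
        container_name: designate_central
        group: designate-central
        enabled: true
        image: registry.osism.tech/kolla/designate-central:2024.2
        volumes:
          - "/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro"
          - "/etc/localtime:/etc/localtime:ro"
          - "/etc/timezone:/etc/timezone:ro"
          - "kolla_logs:/var/log/kolla/"
        dimensions: {}
        healthcheck:
          interval: "30"
          retries: "3"
          start_period: "5"
          test: ["CMD-SHELL", "healthcheck_port designate-central 5672"]
          timeout: "30"

Each "Copying over ..." task renders the matching template into /etc/kolla/<service>/ on the nodes in that service's group (the directories mounted read-only into the containers above), and the "skipping" results mean the task's condition evaluated to false for those items; for the backend TLS certificate and key tasks this presumably reflects backend internal TLS being disabled in this testbed.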
2025-11-08 14:01:19.368058 | orchestrator | 2025-11-08 14:01:19.368069 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-11-08 14:01:19.368081 | orchestrator | Saturday 08 November 2025 13:58:58 +0000 (0:00:06.340) 0:00:48.027 ***** 2025-11-08 14:01:19.368092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-08 14:01:19.368116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-08 14:01:19.368128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-08 14:01:19.368146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-08 14:01:19.368158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-08 14:01:19.368169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-08 14:01:19.368188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.368205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.368216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.368235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.368247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.368258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.368269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.368289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.368305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-08 
14:01:19.368317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.368335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.368347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.368358 | orchestrator | 2025-11-08 14:01:19.368369 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-11-08 14:01:19.368380 | orchestrator | Saturday 08 November 2025 13:59:18 +0000 (0:00:20.030) 0:01:08.058 ***** 2025-11-08 14:01:19.368391 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-11-08 14:01:19.368402 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-11-08 14:01:19.368502 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-11-08 14:01:19.368514 | orchestrator | 2025-11-08 14:01:19.368525 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-11-08 14:01:19.368536 | orchestrator | Saturday 08 November 2025 13:59:24 +0000 (0:00:05.613) 0:01:13.672 ***** 2025-11-08 14:01:19.368547 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-11-08 14:01:19.368557 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-11-08 14:01:19.368568 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-11-08 14:01:19.368579 | orchestrator | 2025-11-08 14:01:19.368590 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-11-08 14:01:19.368601 | orchestrator | Saturday 08 November 2025 13:59:28 +0000 (0:00:04.197) 0:01:17.869 ***** 2025-11-08 14:01:19.368612 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-08 14:01:19.368630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-08 14:01:19.368650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-08 14:01:19.368662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-08 14:01:19.368681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.368693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.368704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.368720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-08 14:01:19.368732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.368768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.368780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-08 14:01:19.368798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.368810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.368821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.368841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.368853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.368870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.368889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.368901 | orchestrator | 2025-11-08 14:01:19.368912 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-11-08 14:01:19.368923 | orchestrator | Saturday 08 November 2025 13:59:31 +0000 (0:00:03.031) 0:01:20.900 ***** 2025-11-08 14:01:19.368934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-08 14:01:19.368951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-08 14:01:19.368963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-08 14:01:19.368980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-08 14:01:19.369000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.369011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.369022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.369034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-08 14:01:19.369050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.369062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.369087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.369099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-08 14:01:19.369111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 
'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.369122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.369138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.369149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.369228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.369250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.369262 | orchestrator | 2025-11-08 14:01:19.369273 | 
orchestrator | TASK [designate : include_tasks] *********************************************** 2025-11-08 14:01:19.369284 | orchestrator | Saturday 08 November 2025 13:59:33 +0000 (0:00:02.635) 0:01:23.536 ***** 2025-11-08 14:01:19.369295 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:01:19.369307 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:01:19.369318 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:01:19.369328 | orchestrator | 2025-11-08 14:01:19.369339 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-11-08 14:01:19.369351 | orchestrator | Saturday 08 November 2025 13:59:34 +0000 (0:00:00.555) 0:01:24.091 ***** 2025-11-08 14:01:19.369362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-08 14:01:19.369374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-08 14:01:19.369390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.369402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-08 
14:01:19.369428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.369441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.369452 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:01:19.369463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-08 14:01:19.369475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-08 14:01:19.369491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-08 
14:01:19.369503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.369526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.369538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.369549 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:01:19.369560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-08 14:01:19.369571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-08 14:01:19.369582 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.369598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.369615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.369633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-08 14:01:19.369644 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:01:19.369655 | orchestrator | 2025-11-08 14:01:19.369666 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-11-08 14:01:19.369677 | orchestrator | Saturday 08 November 2025 13:59:35 +0000 (0:00:01.089) 0:01:25.181 ***** 2025-11-08 14:01:19.369689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': 
'9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-08 14:01:19.369700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-08 14:01:19.369717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-08 14:01:19.369813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-08 14:01:19.369836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-08 14:01:19.369848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-08 14:01:19.369859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.369871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.369888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.369907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.369923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 
'timeout': '30'}}}) 2025-11-08 14:01:19.369935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.369947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.369958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.369970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.369993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.370005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.370100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:01:19.370118 | orchestrator | 2025-11-08 14:01:19.370129 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-11-08 14:01:19.370141 | orchestrator | Saturday 08 November 2025 13:59:40 +0000 (0:00:05.051) 0:01:30.233 ***** 2025-11-08 14:01:19.370152 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:01:19.370163 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:01:19.370174 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:01:19.370186 | orchestrator | 2025-11-08 14:01:19.370196 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-11-08 14:01:19.370207 | orchestrator | Saturday 08 November 2025 13:59:40 +0000 (0:00:00.268) 0:01:30.501 ***** 2025-11-08 14:01:19.370219 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-11-08 14:01:19.370230 | orchestrator | 2025-11-08 14:01:19.370241 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-11-08 14:01:19.370251 | orchestrator | Saturday 08 November 2025 13:59:43 +0000 (0:00:02.247) 0:01:32.748 ***** 2025-11-08 14:01:19.370262 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-11-08 14:01:19.370273 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-11-08 14:01:19.370284 | orchestrator | 2025-11-08 14:01:19.370295 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-11-08 14:01:19.370306 | orchestrator | Saturday 08 November 2025 13:59:45 +0000 (0:00:02.713) 0:01:35.462 ***** 2025-11-08 14:01:19.370316 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:01:19.370327 | orchestrator | 2025-11-08 14:01:19.370338 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-11-08 14:01:19.370348 | orchestrator | Saturday 08 November 2025 14:00:01 +0000 (0:00:15.968) 0:01:51.430 ***** 2025-11-08 14:01:19.370359 | orchestrator | 2025-11-08 14:01:19.370369 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-11-08 14:01:19.370380 | orchestrator | Saturday 08 November 2025 14:00:02 +0000 (0:00:00.460) 0:01:51.891 ***** 2025-11-08 14:01:19.370391 | orchestrator | 2025-11-08 14:01:19.370401 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-11-08 14:01:19.370421 | orchestrator | Saturday 08 November 2025 14:00:02 +0000 (0:00:00.100) 0:01:51.991 ***** 2025-11-08 14:01:19.370432 | orchestrator | 2025-11-08 14:01:19.370442 | orchestrator | RUNNING HANDLER [designate 
: Restart designate-backend-bind9 container] ********
2025-11-08 14:01:19.370453 | orchestrator | Saturday 08 November 2025 14:00:02 +0000 (0:00:00.096) 0:01:52.088 *****
2025-11-08 14:01:19.370464 | orchestrator | changed: [testbed-node-1]
2025-11-08 14:01:19.370475 | orchestrator | changed: [testbed-node-0]
2025-11-08 14:01:19.370485 | orchestrator | changed: [testbed-node-2]
2025-11-08 14:01:19.370496 | orchestrator |
2025-11-08 14:01:19.370507 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2025-11-08 14:01:19.370517 | orchestrator | Saturday 08 November 2025 14:00:14 +0000 (0:00:12.355) 0:02:04.444 *****
2025-11-08 14:01:19.370528 | orchestrator | changed: [testbed-node-1]
2025-11-08 14:01:19.370539 | orchestrator | changed: [testbed-node-2]
2025-11-08 14:01:19.370550 | orchestrator | changed: [testbed-node-0]
2025-11-08 14:01:19.370561 | orchestrator |
2025-11-08 14:01:19.370572 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2025-11-08 14:01:19.370583 | orchestrator | Saturday 08 November 2025 14:00:26 +0000 (0:00:11.484) 0:02:15.929 *****
2025-11-08 14:01:19.370594 | orchestrator | changed: [testbed-node-0]
2025-11-08 14:01:19.370605 | orchestrator | changed: [testbed-node-1]
2025-11-08 14:01:19.370616 | orchestrator | changed: [testbed-node-2]
2025-11-08 14:01:19.370627 | orchestrator |
2025-11-08 14:01:19.370638 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2025-11-08 14:01:19.370648 | orchestrator | Saturday 08 November 2025 14:00:38 +0000 (0:00:12.198) 0:02:28.127 *****
2025-11-08 14:01:19.370659 | orchestrator | changed: [testbed-node-0]
2025-11-08 14:01:19.370670 | orchestrator | changed: [testbed-node-1]
2025-11-08 14:01:19.370681 | orchestrator | changed: [testbed-node-2]
2025-11-08 14:01:19.370691 | orchestrator |
2025-11-08 14:01:19.370702 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2025-11-08 14:01:19.370713 | orchestrator | Saturday 08 November 2025 14:00:49 +0000 (0:00:11.258) 0:02:39.386 *****
2025-11-08 14:01:19.370723 | orchestrator | changed: [testbed-node-0]
2025-11-08 14:01:19.370751 | orchestrator | changed: [testbed-node-2]
2025-11-08 14:01:19.370769 | orchestrator | changed: [testbed-node-1]
2025-11-08 14:01:19.370780 | orchestrator |
2025-11-08 14:01:19.370791 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2025-11-08 14:01:19.370802 | orchestrator | Saturday 08 November 2025 14:01:00 +0000 (0:00:10.805) 0:02:50.192 *****
2025-11-08 14:01:19.370813 | orchestrator | changed: [testbed-node-0]
2025-11-08 14:01:19.370823 | orchestrator | changed: [testbed-node-1]
2025-11-08 14:01:19.370834 | orchestrator | changed: [testbed-node-2]
2025-11-08 14:01:19.370845 | orchestrator |
2025-11-08 14:01:19.370856 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2025-11-08 14:01:19.370867 | orchestrator | Saturday 08 November 2025 14:01:11 +0000 (0:00:11.088) 0:03:01.280 *****
2025-11-08 14:01:19.370878 | orchestrator | changed: [testbed-node-0]
2025-11-08 14:01:19.370889 | orchestrator |
2025-11-08 14:01:19.370899 | orchestrator | PLAY RECAP *********************************************************************
2025-11-08 14:01:19.370911 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-11-08 14:01:19.370922 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-11-08 14:01:19.370934 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-11-08 14:01:19.370945 | orchestrator |
2025-11-08 14:01:19.370955 | orchestrator |
2025-11-08 14:01:19.370972 | orchestrator | TASKS RECAP ********************************************************************
2025-11-08 14:01:19.370984 | orchestrator | Saturday 08 November 2025 14:01:18 +0000 (0:00:07.085) 0:03:08.365 *****
2025-11-08 14:01:19.371002 | orchestrator | ===============================================================================
2025-11-08 14:01:19.371012 | orchestrator | designate : Copying over designate.conf -------------------------------- 20.03s
2025-11-08 14:01:19.371023 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.97s
2025-11-08 14:01:19.371034 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 12.36s
2025-11-08 14:01:19.371045 | orchestrator | designate : Restart designate-central container ------------------------ 12.20s
2025-11-08 14:01:19.371055 | orchestrator | designate : Restart designate-api container ---------------------------- 11.48s
2025-11-08 14:01:19.371066 | orchestrator | designate : Restart designate-producer container ----------------------- 11.26s
2025-11-08 14:01:19.371077 | orchestrator | designate : Restart designate-worker container ------------------------- 11.09s
2025-11-08 14:01:19.371088 | orchestrator | designate : Restart designate-mdns container --------------------------- 10.81s
2025-11-08 14:01:19.371098 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.09s
2025-11-08 14:01:19.371109 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.61s
2025-11-08 14:01:19.371120 | orchestrator | designate : Copying over config.json files for services ----------------- 6.34s
2025-11-08 14:01:19.371131 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.10s
2025-11-08 14:01:19.371142 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 5.61s
2025-11-08 14:01:19.371152 | orchestrator | designate : Check designate containers ---------------------------------- 5.05s
2025-11-08 14:01:19.371163 | orchestrator | designate : Copying over named.conf ------------------------------------- 4.20s
2025-11-08 14:01:19.371174 | orchestrator | service-ks-register : designate | Creating services --------------------- 4.15s
2025-11-08 14:01:19.371185 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.90s
2025-11-08 14:01:19.371195 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.65s
2025-11-08 14:01:19.371206 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.62s
2025-11-08 14:01:19.371217 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.32s
2025-11-08 14:01:19.371228 | orchestrator | 2025-11-08 14:01:19 | INFO  | Wait 1 second(s) until the next check
2025-11-08 14:01:22.402911 | orchestrator | 2025-11-08 14:01:22 | INFO  | Task df643b7b-3d6d-4172-b479-e9e17506723d is in state STARTED
2025-11-08 14:01:22.403293 | orchestrator | 2025-11-08 14:01:22 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state
STARTED 2025-11-08 14:01:22.404121 | orchestrator | 2025-11-08 14:01:22 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:01:22.406069 | orchestrator | 2025-11-08 14:01:22 | INFO  | Task 458dda79-f6f6-4d52-8f9b-366962991c3a is in state STARTED 2025-11-08 14:01:22.406115 | orchestrator | 2025-11-08 14:01:22 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:01:25.447571 | orchestrator | 2025-11-08 14:01:25 | INFO  | Task df643b7b-3d6d-4172-b479-e9e17506723d is in state STARTED 2025-11-08 14:01:25.450485 | orchestrator | 2025-11-08 14:01:25 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:01:25.451815 | orchestrator | 2025-11-08 14:01:25 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:01:25.454073 | orchestrator | 2025-11-08 14:01:25 | INFO  | Task 90a56aba-d6ae-4ffd-a8d1-5ea23252086b is in state STARTED 2025-11-08 14:01:25.458588 | orchestrator | 2025-11-08 14:01:25 | INFO  | Task 458dda79-f6f6-4d52-8f9b-366962991c3a is in state SUCCESS 2025-11-08 14:01:25.460621 | orchestrator | 2025-11-08 14:01:25.460664 | orchestrator | 2025-11-08 14:01:25.460671 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-08 14:01:25.460694 | orchestrator | 2025-11-08 14:01:25.460699 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-08 14:01:25.460704 | orchestrator | Saturday 08 November 2025 13:58:02 +0000 (0:00:00.278) 0:00:00.278 ***** 2025-11-08 14:01:25.460709 | orchestrator | ok: [testbed-manager] 2025-11-08 14:01:25.460714 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:01:25.460719 | orchestrator | ok: [testbed-node-1] 2025-11-08 14:01:25.460723 | orchestrator | ok: [testbed-node-2] 2025-11-08 14:01:25.460752 | orchestrator | ok: [testbed-node-3] 2025-11-08 14:01:25.460757 | orchestrator | ok: [testbed-node-4] 2025-11-08 14:01:25.460761 | orchestrator | ok: [testbed-node-5] 2025-11-08 14:01:25.460765 | orchestrator | 2025-11-08 14:01:25.460770 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-08 14:01:25.460774 | orchestrator | Saturday 08 November 2025 13:58:03 +0000 (0:00:01.068) 0:00:01.347 ***** 2025-11-08 14:01:25.460781 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-11-08 14:01:25.460786 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-11-08 14:01:25.460790 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-11-08 14:01:25.460794 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-11-08 14:01:25.460799 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-11-08 14:01:25.460803 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-11-08 14:01:25.460807 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-11-08 14:01:25.460812 | orchestrator | 2025-11-08 14:01:25.460816 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-11-08 14:01:25.460820 | orchestrator | 2025-11-08 14:01:25.460825 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-11-08 14:01:25.460829 | orchestrator | Saturday 08 November 2025 13:58:04 +0000 (0:00:00.901) 0:00:02.248 ***** 2025-11-08 14:01:25.460834 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml 
for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 14:01:25.460841 | orchestrator | 2025-11-08 14:01:25.460845 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-11-08 14:01:25.460849 | orchestrator | Saturday 08 November 2025 13:58:05 +0000 (0:00:01.589) 0:00:03.838 ***** 2025-11-08 14:01:25.460857 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-11-08 14:01:25.460867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-08 14:01:25.460873 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-08 14:01:25.460894 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-08 14:01:25.460915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 14:01:25.460923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-08 14:01:25.460931 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-11-08 14:01:25.460942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-08 14:01:25.460951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 14:01:25.460959 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-08 14:01:25.460972 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}}) 2025-11-08 14:01:25.460991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 14:01:25.460998 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-08 14:01:25.461007 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 14:01:25.461012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 14:01:25.461016 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-08 14:01:25.461021 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-08 14:01:25.461030 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-08 14:01:25.461035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 14:01:25.461048 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-08 14:01:25.461056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 14:01:25.461063 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-11-08 14:01:25.461069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 14:01:25.461076 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-11-08 14:01:25.461092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-08 14:01:25.461113 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-11-08 14:01:25.461123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-08 14:01:25.461135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 14:01:25.461142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 14:01:25.461149 | orchestrator | 2025-11-08 14:01:25.461157 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-11-08 14:01:25.461164 | orchestrator | Saturday 08 November 2025 13:58:09 +0000 (0:00:03.597) 0:00:07.436 ***** 2025-11-08 14:01:25.461171 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 14:01:25.461176 | orchestrator | 2025-11-08 14:01:25.461180 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-11-08 14:01:25.461184 | 
orchestrator | Saturday 08 November 2025 13:58:10 +0000 (0:00:01.416) 0:00:08.852 ***** 2025-11-08 14:01:25.461189 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-11-08 14:01:25.461201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-08 14:01:25.461206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-08 14:01:25.461210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-08 14:01:25.461220 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-08 14:01:25.461225 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-08 14:01:25.461229 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-08 14:01:25.461234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 14:01:25.461238 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-08 14:01:25.461247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 14:01:25.461252 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-08 14:01:25.461285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 14:01:25.461299 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-08 14:01:25.461304 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-08 14:01:25.461308 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-08 14:01:25.461313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 14:01:25.461320 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-11-08 14:01:25.461338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 14:01:25.461347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 14:01:25.461363 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 14:01:25.461371 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-11-08 14:01:25.461376 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-11-08 14:01:25.461380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-08 14:01:25.461389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-08 14:01:25.461393 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-11-08 14:01:25.461398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-08 14:01:25.461402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 14:01:25.461549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 14:01:25.461612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 14:01:25.461620 | orchestrator | 2025-11-08 14:01:25.461627 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-11-08 14:01:25.461632 | orchestrator | Saturday 08 November 2025 13:58:16 +0000 (0:00:06.198) 0:00:15.051 ***** 2025-11-08 14:01:25.461639 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-11-08 14:01:25.461659 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-08 14:01:25.461664 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-08 14:01:25.461671 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-11-08 14:01:25.461693 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 14:01:25.461698 | orchestrator | skipping: [testbed-manager] 2025-11-08 14:01:25.461703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-08 14:01:25.461709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': 
{'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 14:01:25.461718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 14:01:25.461722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-08 14:01:25.461858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 14:01:25.461868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-08 14:01:25.461874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 14:01:25.461891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 14:01:25.461899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-08 14:01:25.461906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 14:01:25.461924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-08 14:01:25.461932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 14:01:25.461962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 14:01:25.461996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-08 14:01:25.462002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 14:01:25.462007 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:01:25.462011 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:01:25.462048 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:01:25.462480 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-08 14:01:25.462506 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-08 14:01:25.462523 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-08 14:01:25.462532 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:01:25.462539 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-08 14:01:25.462546 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-08 14:01:25.462555 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-08 14:01:25.462562 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:01:25.462569 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-08 14:01:25.462581 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-08 14:01:25.462605 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-08 14:01:25.462616 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:01:25.462621 | orchestrator | 2025-11-08 14:01:25.462625 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-11-08 14:01:25.462630 | orchestrator | Saturday 08 November 2025 13:58:18 +0000 (0:00:01.597) 0:00:16.648 ***** 2025-11-08 14:01:25.462636 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-11-08 14:01:25.462641 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-08 14:01:25.462646 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-08 14:01:25.462651 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-11-08 14:01:25.462657 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 14:01:25.462664 | orchestrator | skipping: [testbed-manager] 2025-11-08 14:01:25.462751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-08 14:01:25.462765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 14:01:25.462770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-08 14:01:25.462777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 14:01:25.462785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 14:01:25.462793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 14:01:25.462800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-08 14:01:25.462813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-08 14:01:25.462842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 14:01:25.462849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 14:01:25.462853 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:01:25.462858 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:01:25.462862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-08 14:01:25.462867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 14:01:25.462872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 14:01:25.462876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-08 14:01:25.462881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-08 14:01:25.462885 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:01:25.462911 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-08 14:01:25.462917 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-08 14:01:25.462923 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-08 14:01:25.462927 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:01:25.462932 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-08 14:01:25.462937 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-08 14:01:25.462941 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-08 14:01:25.462946 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:01:25.462951 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-08 14:01:25.462956 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-08 14:01:25.462970 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-08 14:01:25 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:01:25.462978 | orchestrator | 2025-11-08 14:01:25.462985 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:01:25.462990 | orchestrator | 2025-11-08 14:01:25.462994 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-11-08 14:01:25.462999 | orchestrator | Saturday 08 November 2025 13:58:20 +0000 (0:00:01.934) 0:00:18.583 ***** 2025-11-08 14:01:25.463003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-08 14:01:25.463008 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 
'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-11-08 14:01:25.463013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-08 14:01:25.463018 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-08 14:01:25.463023 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-08 14:01:25.463046 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-08 14:01:25.463058 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-08 14:01:25.463064 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-08 14:01:25.463336 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 14:01:25.463351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 14:01:25.463358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 14:01:25.463367 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-08 14:01:25.463375 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-08 14:01:25.463399 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-08 14:01:25.463426 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-08 14:01:25.463433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 14:01:25.463437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 14:01:25.463442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 14:01:25.463446 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-11-08 14:01:25.463451 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-11-08 14:01:25.463461 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': 
{'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-11-08 14:01:25.463480 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-11-08 14:01:25.463486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-08 14:01:25.463490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-08 14:01:25.463495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-08 14:01:25.463499 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 14:01:25.463504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 
'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 14:01:25.463512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 14:01:25.463518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 14:01:25.463523 | orchestrator | 2025-11-08 14:01:25.463527 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-11-08 14:01:25.463534 | orchestrator | Saturday 08 November 2025 13:58:26 +0000 (0:00:05.916) 0:00:24.499 ***** 2025-11-08 14:01:25.463539 | orchestrator | ok: [testbed-manager -> localhost] 2025-11-08 14:01:25.463544 | orchestrator | 2025-11-08 14:01:25.463560 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-11-08 14:01:25.463565 | orchestrator | Saturday 08 November 2025 13:58:27 +0000 (0:00:01.298) 0:00:25.798 ***** 2025-11-08 14:01:25.463569 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1090939, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.760354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.463577 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1090939, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.760354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.463582 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1090959, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 
'mtime': 1762560146.0, 'ctime': 1762607710.764417, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.463587 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1090939, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.760354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.463595 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1090939, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.760354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.463599 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1090939, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.760354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-08 14:01:25.463619 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1090930, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7598753, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.463625 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1090959, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.764417, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.463629 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1090939, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.760354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.463634 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1090939, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.760354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.463642 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1090959, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.764417, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.463647 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1090959, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.764417, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.463651 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1090952, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7624598, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.463671 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1090959, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.764417, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.463676 | orchestrator | skipping: [testbed-node-1] => 
(item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1090930, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7598753, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.463681 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1090930, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7598753, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.463685 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1090924, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7584078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.463693 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1090930, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7598753, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.463698 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1090959, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.764417, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.463702 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1090930, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7598753, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False})  2025-11-08 14:01:25.463720 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1090952, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7624598, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.463770 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1090942, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.760678, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.463779 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1090952, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7624598, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.463784 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1090952, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7624598, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.463795 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1090924, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7584078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.463799 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1090930, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7598753, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.463804 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1090952, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7624598, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.463815 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1090942, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.760678, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.463820 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1090924, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7584078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.463824 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1090952, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7624598, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.463829 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1090949, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.761969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.463838 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1090949, 'dev': 94, 'nlink': 1, 'atime': 
1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.761969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.463843 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1090924, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7584078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.463847 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1090959, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.764417, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-08 14:01:25.463855 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1090942, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.760678, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.463864 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1090924, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7584078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.463869 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1090924, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7584078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.463878 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1090942, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.760678, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464010 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1090944, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7611172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464019 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1090944, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7611172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464026 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1090949, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.761969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464038 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1090942, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.760678, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464062 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1090942, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.760678, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464071 | orchestrator | skipping: [testbed-node-1] => (item={'path': 
'/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1090949, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.761969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464096 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090935, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7600935, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464101 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1090949, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.761969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464106 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1090944, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7611172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464111 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1090944, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7611172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464115 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090935, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7600935, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464135 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1090944, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7611172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464141 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1090949, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.761969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464150 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090935, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7600935, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464155 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090935, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7600935, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464159 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090957, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7636974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464164 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090957, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7636974, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464168 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1090930, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7598753, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-08 14:01:25.464186 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090957, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7636974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464195 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090957, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7636974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464200 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1090944, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7611172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464204 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090918, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7577336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464209 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090935, 'dev': 
94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7600935, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464213 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090918, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7577336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464218 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090918, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7577336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464236 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090918, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7577336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464251 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090957, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7636974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464255 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090935, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7600935, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464260 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090973, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7659926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464264 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090973, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7659926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464268 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090973, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7659926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464273 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090918, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7577336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464294 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1090955, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7630107, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464303 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090957, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7636974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464307 | orchestrator | 
skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090973, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7659926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464311 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1090955, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7630107, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464316 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1090952, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7624598, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-08 14:01:25.464323 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090927, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7589662, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464330 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1090955, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7630107, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464346 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090918, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7577336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464358 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090973, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7659926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464366 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1090921, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7579515, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464372 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1090955, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7630107, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464376 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090973, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7659926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464381 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090927, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7589662, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464385 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090927, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 
1762607710.7589662, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464401 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1090947, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7617073, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464407 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090927, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7589662, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464411 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1090955, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7630107, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464415 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1090921, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7579515, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464420 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1090955, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7630107, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464424 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090945, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7613995, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464428 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1090921, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7579515, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464443 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1090921, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7579515, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464448 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1090947, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7617073, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464452 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090927, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7589662, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464457 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1090924, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7584078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-08 14:01:25.464461 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1090947, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7617073, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464466 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090945, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7613995, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464470 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090927, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7589662, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464491 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090971, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.765256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464497 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:01:25.464502 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1090947, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7617073, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464507 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090971, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.765256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464511 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:01:25.464516 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1090921, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7579515, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464520 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090945, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7613995, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464525 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090945, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7613995, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464532 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1090921, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7579515, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464545 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1090947, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7617073, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464549 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090971, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 
1762560146.0, 'ctime': 1762607710.765256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464554 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:01:25.464559 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1090947, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7617073, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464563 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090971, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.765256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464567 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:01:25.464571 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1090942, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.760678, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-08 14:01:25.464576 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090945, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7613995, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464583 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090945, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7613995, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464593 | orchestrator | skipping: [testbed-node-3] => (item={'path': 
'/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090971, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.765256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464597 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:01:25.464602 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090971, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.765256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-08 14:01:25.464606 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:01:25.464610 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1090949, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.761969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-08 14:01:25.464615 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1090944, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7611172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-08 14:01:25.464619 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090935, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7600935, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-08 14:01:25.464631 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090957, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7636974, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-08 14:01:25.464636 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090918, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7577336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-08 14:01:25.464646 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090973, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7659926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-08 14:01:25.464651 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1090955, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7630107, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-08 14:01:25.464655 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090927, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7589662, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-08 14:01:25.464659 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1090921, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7579515, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-08 14:01:25.464664 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1090947, 
'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7617073, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-08 14:01:25.464672 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090945, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7613995, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-08 14:01:25.464677 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090971, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.765256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-08 14:01:25.464681 | orchestrator | 2025-11-08 14:01:25.464685 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-11-08 14:01:25.464690 | orchestrator | Saturday 08 November 2025 13:58:54 +0000 (0:00:27.322) 0:00:53.121 ***** 2025-11-08 14:01:25.464697 | orchestrator | ok: [testbed-manager -> localhost] 2025-11-08 14:01:25.464701 | orchestrator | 2025-11-08 14:01:25.464708 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-11-08 14:01:25.464713 | orchestrator | Saturday 08 November 2025 13:58:55 +0000 (0:00:00.563) 0:00:53.684 ***** 2025-11-08 14:01:25.464717 | orchestrator | [WARNING]: Skipped 2025-11-08 14:01:25.464722 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-11-08 14:01:25.464748 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-11-08 14:01:25.464754 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-11-08 14:01:25.464758 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-11-08 14:01:25.464762 | orchestrator | [WARNING]: Skipped 2025-11-08 14:01:25.464766 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-11-08 14:01:25.464770 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-11-08 14:01:25.464775 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-11-08 14:01:25.464779 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-11-08 14:01:25.464783 | orchestrator | [WARNING]: Skipped 2025-11-08 14:01:25.464787 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-11-08 14:01:25.464791 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-11-08 14:01:25.464795 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-11-08 
14:01:25.464799 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-11-08 14:01:25.464803 | orchestrator | [WARNING]: Skipped 2025-11-08 14:01:25.464808 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-11-08 14:01:25.464812 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-11-08 14:01:25.464816 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-11-08 14:01:25.464821 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-11-08 14:01:25.464825 | orchestrator | [WARNING]: Skipped 2025-11-08 14:01:25.464829 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-11-08 14:01:25.464833 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-11-08 14:01:25.464842 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-11-08 14:01:25.464846 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-11-08 14:01:25.464850 | orchestrator | [WARNING]: Skipped 2025-11-08 14:01:25.464854 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-11-08 14:01:25.464859 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-11-08 14:01:25.464863 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-11-08 14:01:25.464867 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-11-08 14:01:25.464871 | orchestrator | [WARNING]: Skipped 2025-11-08 14:01:25.464876 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-11-08 14:01:25.464880 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-11-08 14:01:25.464884 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-11-08 14:01:25.464888 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-11-08 14:01:25.464892 | orchestrator | ok: [testbed-manager -> localhost] 2025-11-08 14:01:25.464896 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-11-08 14:01:25.464900 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-08 14:01:25.464904 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-11-08 14:01:25.464909 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-11-08 14:01:25.464913 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-11-08 14:01:25.464917 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-11-08 14:01:25.464921 | orchestrator | 2025-11-08 14:01:25.464925 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-11-08 14:01:25.464929 | orchestrator | Saturday 08 November 2025 13:58:57 +0000 (0:00:01.949) 0:00:55.633 ***** 2025-11-08 14:01:25.464934 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-11-08 14:01:25.464946 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:01:25.464950 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-11-08 14:01:25.464954 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:01:25.464959 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-11-08 14:01:25.464963 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:01:25.464968 | orchestrator | skipping: [testbed-node-3] => 
(item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-11-08 14:01:25.464972 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:01:25.464976 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-11-08 14:01:25.464980 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:01:25.464985 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-11-08 14:01:25.464989 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:01:25.464993 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-11-08 14:01:25.464997 | orchestrator | 2025-11-08 14:01:25.465001 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-11-08 14:01:25.465005 | orchestrator | Saturday 08 November 2025 13:59:24 +0000 (0:00:26.945) 0:01:22.579 ***** 2025-11-08 14:01:25.465013 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-11-08 14:01:25.465017 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:01:25.465025 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-11-08 14:01:25.465030 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:01:25.465034 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-11-08 14:01:25.465043 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:01:25.465047 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-11-08 14:01:25.465051 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:01:25.465056 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-11-08 14:01:25.465060 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:01:25.465064 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-11-08 14:01:25.465068 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:01:25.465072 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-11-08 14:01:25.465077 | orchestrator | 2025-11-08 14:01:25.465081 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-11-08 14:01:25.465085 | orchestrator | Saturday 08 November 2025 13:59:28 +0000 (0:00:04.290) 0:01:26.870 ***** 2025-11-08 14:01:25.465089 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-11-08 14:01:25.465094 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:01:25.465098 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-11-08 14:01:25.465102 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:01:25.465107 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-11-08 14:01:25.465111 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:01:25.465115 | orchestrator | skipping: [testbed-node-3] => 
(item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-11-08 14:01:25.465119 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:01:25.465123 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-11-08 14:01:25.465128 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-11-08 14:01:25.465132 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:01:25.465136 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-11-08 14:01:25.465141 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:01:25.465145 | orchestrator | 2025-11-08 14:01:25.465149 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-11-08 14:01:25.465153 | orchestrator | Saturday 08 November 2025 13:59:31 +0000 (0:00:02.560) 0:01:29.431 ***** 2025-11-08 14:01:25.465157 | orchestrator | ok: [testbed-manager -> localhost] 2025-11-08 14:01:25.465161 | orchestrator | 2025-11-08 14:01:25.465165 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-11-08 14:01:25.465169 | orchestrator | Saturday 08 November 2025 13:59:32 +0000 (0:00:00.918) 0:01:30.349 ***** 2025-11-08 14:01:25.465174 | orchestrator | skipping: [testbed-manager] 2025-11-08 14:01:25.465178 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:01:25.465182 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:01:25.465186 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:01:25.465190 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:01:25.465194 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:01:25.465198 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:01:25.465202 | orchestrator | 2025-11-08 14:01:25.465206 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-11-08 14:01:25.465211 | orchestrator | Saturday 08 November 2025 13:59:33 +0000 (0:00:00.939) 0:01:31.288 ***** 2025-11-08 14:01:25.465215 | orchestrator | skipping: [testbed-manager] 2025-11-08 14:01:25.465219 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:01:25.465227 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:01:25.465231 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:01:25.465235 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:01:25.465239 | orchestrator | changed: [testbed-node-2] 2025-11-08 14:01:25.465243 | orchestrator | changed: [testbed-node-1] 2025-11-08 14:01:25.465247 | orchestrator | 2025-11-08 14:01:25.465251 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-11-08 14:01:25.465256 | orchestrator | Saturday 08 November 2025 13:59:35 +0000 (0:00:02.780) 0:01:34.069 ***** 2025-11-08 14:01:25.465260 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-11-08 14:01:25.465264 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:01:25.465268 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-11-08 14:01:25.465272 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:01:25.465277 | orchestrator | skipping: [testbed-node-2] => 
(item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-11-08 14:01:25.465281 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:01:25.465285 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-11-08 14:01:25.465289 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:01:25.465296 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-11-08 14:01:25.465304 | orchestrator | skipping: [testbed-manager] 2025-11-08 14:01:25.465308 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-11-08 14:01:25.465312 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:01:25.465316 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-11-08 14:01:25.465320 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:01:25.465325 | orchestrator | 2025-11-08 14:01:25.465329 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-11-08 14:01:25.465334 | orchestrator | Saturday 08 November 2025 13:59:38 +0000 (0:00:02.401) 0:01:36.471 ***** 2025-11-08 14:01:25.465338 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-11-08 14:01:25.465342 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-11-08 14:01:25.465347 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:01:25.465351 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:01:25.465355 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-11-08 14:01:25.465359 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:01:25.465363 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-11-08 14:01:25.465368 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:01:25.465372 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-11-08 14:01:25.465376 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:01:25.465380 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-11-08 14:01:25.465384 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-11-08 14:01:25.465389 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:01:25.465393 | orchestrator | 2025-11-08 14:01:25.465397 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-11-08 14:01:25.465401 | orchestrator | Saturday 08 November 2025 13:59:39 +0000 (0:00:01.173) 0:01:37.644 ***** 2025-11-08 14:01:25.465405 | orchestrator | [WARNING]: Skipped 2025-11-08 14:01:25.465409 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-11-08 14:01:25.465413 | orchestrator | due to this access issue: 2025-11-08 14:01:25.465422 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-11-08 14:01:25.465426 | orchestrator | not a directory 2025-11-08 14:01:25.465430 | orchestrator | ok: [testbed-manager -> localhost] 2025-11-08 
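The [WARNING] lines emitted by the two "Find prometheus ... config overrides" tasks and by "Find extra prometheus server config files" are harmless: the prometheus role only merges optional overrides from /opt/configuration/environments/kolla/files/overlays/prometheus/<inventory_hostname>/prometheus.yml.d/ and from the extras/ directory when those paths exist, and this testbed configuration does not provide them, so the lookups are skipped. As a minimal sketch (not part of this job's configuration), a host-specific override that the "Find prometheus host config overrides" task would pick up might look like the following; the file name, job name and target are hypothetical examples, not values from this deployment:

    # /opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d/99-extra-scrape.yml
    # Hypothetical snippet; intended to be merged by the role into the rendered prometheus.yml.
    scrape_configs:
      - job_name: example_extra_exporter      # example name only
        static_configs:
          - targets:
              - "192.168.16.5:9100"           # example target, not taken from this job

Files placed under the extras/ overlay directory would similarly be picked up by the "Find extra prometheus server config files" task and handled by the subsequent "Create subdirectories for extra config files" and "Template extra prometheus server config files" tasks, which are skipped in this run because no such files exist.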
14:01:25.465434 | orchestrator | 2025-11-08 14:01:25.465438 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-11-08 14:01:25.465443 | orchestrator | Saturday 08 November 2025 13:59:40 +0000 (0:00:01.464) 0:01:39.108 ***** 2025-11-08 14:01:25.465454 | orchestrator | skipping: [testbed-manager] 2025-11-08 14:01:25.465458 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:01:25.465462 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:01:25.465466 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:01:25.465471 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:01:25.465475 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:01:25.465479 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:01:25.465483 | orchestrator | 2025-11-08 14:01:25.465487 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-11-08 14:01:25.465491 | orchestrator | Saturday 08 November 2025 13:59:41 +0000 (0:00:00.933) 0:01:40.042 ***** 2025-11-08 14:01:25.465495 | orchestrator | skipping: [testbed-manager] 2025-11-08 14:01:25.465500 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:01:25.465504 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:01:25.465508 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:01:25.465512 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:01:25.465516 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:01:25.465520 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:01:25.465524 | orchestrator | 2025-11-08 14:01:25.465528 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-11-08 14:01:25.465532 | orchestrator | Saturday 08 November 2025 13:59:42 +0000 (0:00:01.123) 0:01:41.165 ***** 2025-11-08 14:01:25.465537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-08 14:01:25.465546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-08 14:01:25.465554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-08 14:01:25.465559 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-11-08 14:01:25.465568 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-08 14:01:25.465572 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-08 14:01:25.465577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 14:01:25.465581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 14:01:25.465585 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-08 14:01:25.465595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 14:01:25.465600 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-08 14:01:25.465605 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-08 14:01:25.465613 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-08 14:01:25.465618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 14:01:25.465622 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-08 14:01:25.465627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 14:01:25.465631 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-08 14:01:25.465640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 14:01:25.465645 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-11-08 14:01:25.465654 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-11-08 14:01:25.465659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-08 14:01:25.465663 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-11-08 14:01:25.465669 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-11-08 14:01:25.465674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-08 14:01:25.465683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-08 14:01:25.465688 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 14:01:25.465695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 14:01:25.465700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 14:01:25.465705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-08 14:01:25.465709 | orchestrator | 2025-11-08 14:01:25.465713 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-11-08 14:01:25.465718 | orchestrator | Saturday 08 November 2025 13:59:47 +0000 (0:00:04.885) 0:01:46.050 ***** 2025-11-08 14:01:25.465740 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-11-08 14:01:25.465744 | orchestrator | skipping: [testbed-manager] 2025-11-08 14:01:25.465749 | orchestrator | 2025-11-08 14:01:25.465753 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-11-08 14:01:25.465757 | orchestrator | Saturday 08 November 2025 13:59:49 +0000 (0:00:01.318) 0:01:47.369 ***** 2025-11-08 14:01:25.465761 | orchestrator | 2025-11-08 14:01:25.465766 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-11-08 14:01:25.465770 | orchestrator | Saturday 08 November 2025 13:59:49 +0000 (0:00:00.065) 0:01:47.434 ***** 2025-11-08 14:01:25.465774 | orchestrator | 2025-11-08 14:01:25.465778 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-11-08 14:01:25.465782 | orchestrator | Saturday 08 November 2025 13:59:49 +0000 (0:00:00.059) 0:01:47.494 ***** 2025-11-08 14:01:25.465786 | orchestrator | 2025-11-08 14:01:25.465790 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-11-08 14:01:25.465795 | orchestrator | Saturday 08 November 2025 13:59:49 +0000 (0:00:00.056) 0:01:47.550 ***** 2025-11-08 14:01:25.465799 | orchestrator | 2025-11-08 14:01:25.465803 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-11-08 14:01:25.465807 | orchestrator | Saturday 08 November 2025 13:59:49 +0000 (0:00:00.163) 0:01:47.714 ***** 2025-11-08 14:01:25.465811 | orchestrator | 2025-11-08 14:01:25.465815 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-11-08 14:01:25.465820 | orchestrator | Saturday 08 November 2025 13:59:49 +0000 (0:00:00.088) 0:01:47.802 ***** 2025-11-08 14:01:25.465824 | orchestrator | 2025-11-08 14:01:25.465828 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-11-08 14:01:25.465836 | orchestrator | Saturday 08 November 2025 13:59:49 +0000 (0:00:00.117) 0:01:47.920 ***** 2025-11-08 14:01:25.465840 | orchestrator | 2025-11-08 14:01:25.465845 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-11-08 14:01:25.465849 | orchestrator | Saturday 08 November 2025 13:59:49 +0000 (0:00:00.162) 0:01:48.082 ***** 2025-11-08 14:01:25.465853 | orchestrator | changed: 
[testbed-manager] 2025-11-08 14:01:25.465857 | orchestrator | 2025-11-08 14:01:25.465861 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-11-08 14:01:25.465868 | orchestrator | Saturday 08 November 2025 14:00:04 +0000 (0:00:14.732) 0:02:02.815 ***** 2025-11-08 14:01:25.465875 | orchestrator | changed: [testbed-manager] 2025-11-08 14:01:25.465880 | orchestrator | changed: [testbed-node-2] 2025-11-08 14:01:25.465884 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:01:25.465888 | orchestrator | changed: [testbed-node-1] 2025-11-08 14:01:25.465892 | orchestrator | changed: [testbed-node-3] 2025-11-08 14:01:25.465896 | orchestrator | changed: [testbed-node-5] 2025-11-08 14:01:25.465900 | orchestrator | changed: [testbed-node-4] 2025-11-08 14:01:25.465905 | orchestrator | 2025-11-08 14:01:25.465909 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-11-08 14:01:25.465913 | orchestrator | Saturday 08 November 2025 14:00:15 +0000 (0:00:10.780) 0:02:13.595 ***** 2025-11-08 14:01:25.465917 | orchestrator | changed: [testbed-node-1] 2025-11-08 14:01:25.465922 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:01:25.465926 | orchestrator | changed: [testbed-node-2] 2025-11-08 14:01:25.465930 | orchestrator | 2025-11-08 14:01:25.465934 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-11-08 14:01:25.465938 | orchestrator | Saturday 08 November 2025 14:00:23 +0000 (0:00:07.865) 0:02:21.461 ***** 2025-11-08 14:01:25.465943 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:01:25.465947 | orchestrator | changed: [testbed-node-1] 2025-11-08 14:01:25.465951 | orchestrator | changed: [testbed-node-2] 2025-11-08 14:01:25.465955 | orchestrator | 2025-11-08 14:01:25.465959 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-11-08 14:01:25.465963 | orchestrator | Saturday 08 November 2025 14:00:33 +0000 (0:00:10.746) 0:02:32.208 ***** 2025-11-08 14:01:25.465967 | orchestrator | changed: [testbed-manager] 2025-11-08 14:01:25.465971 | orchestrator | changed: [testbed-node-2] 2025-11-08 14:01:25.465975 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:01:25.465979 | orchestrator | changed: [testbed-node-3] 2025-11-08 14:01:25.465983 | orchestrator | changed: [testbed-node-5] 2025-11-08 14:01:25.465987 | orchestrator | changed: [testbed-node-4] 2025-11-08 14:01:25.465992 | orchestrator | changed: [testbed-node-1] 2025-11-08 14:01:25.465996 | orchestrator | 2025-11-08 14:01:25.466000 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-11-08 14:01:25.466004 | orchestrator | Saturday 08 November 2025 14:00:49 +0000 (0:00:15.320) 0:02:47.529 ***** 2025-11-08 14:01:25.466008 | orchestrator | changed: [testbed-manager] 2025-11-08 14:01:25.466012 | orchestrator | 2025-11-08 14:01:25.466043 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-11-08 14:01:25.466047 | orchestrator | Saturday 08 November 2025 14:00:57 +0000 (0:00:08.674) 0:02:56.203 ***** 2025-11-08 14:01:25.466051 | orchestrator | changed: [testbed-node-2] 2025-11-08 14:01:25.466055 | orchestrator | changed: [testbed-node-1] 2025-11-08 14:01:25.466059 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:01:25.466063 | orchestrator | 2025-11-08 14:01:25.466067 | orchestrator | RUNNING HANDLER [prometheus : Restart 
prometheus-blackbox-exporter container] ***
2025-11-08 14:01:25.466072 | orchestrator | Saturday 08 November 2025 14:01:07 +0000 (0:00:10.038) 0:03:06.242 *****
2025-11-08 14:01:25.466076 | orchestrator | changed: [testbed-manager]
2025-11-08 14:01:25.466080 | orchestrator |
2025-11-08 14:01:25.466084 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2025-11-08 14:01:25.466088 | orchestrator | Saturday 08 November 2025 14:01:17 +0000 (0:00:09.960) 0:03:16.202 *****
2025-11-08 14:01:25.466096 | orchestrator | changed: [testbed-node-3]
2025-11-08 14:01:25.466100 | orchestrator | changed: [testbed-node-4]
2025-11-08 14:01:25.466104 | orchestrator | changed: [testbed-node-5]
2025-11-08 14:01:25.466109 | orchestrator |
2025-11-08 14:01:25.466113 | orchestrator | PLAY RECAP *********************************************************************
2025-11-08 14:01:25.466117 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-11-08 14:01:25.466122 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-11-08 14:01:25.466126 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-11-08 14:01:25.466130 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-11-08 14:01:25.466134 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-11-08 14:01:25.466139 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-11-08 14:01:25.466143 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-11-08 14:01:25.466147 | orchestrator |
2025-11-08 14:01:25.466152 | orchestrator |
2025-11-08 14:01:25.466156 | orchestrator | TASKS RECAP ********************************************************************
2025-11-08 14:01:25.466160 | orchestrator | Saturday 08 November 2025 14:01:23 +0000 (0:00:05.513) 0:03:21.716 *****
2025-11-08 14:01:25.466175 | orchestrator | ===============================================================================
2025-11-08 14:01:25.466180 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 27.32s
2025-11-08 14:01:25.466184 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 26.95s
2025-11-08 14:01:25.466188 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 15.32s
2025-11-08 14:01:25.466192 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 14.73s
2025-11-08 14:01:25.466200 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 10.78s
2025-11-08 14:01:25.466209 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.75s
2025-11-08 14:01:25.466213 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.04s
2025-11-08 14:01:25.466217 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 9.96s
2025-11-08 14:01:25.466221 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.67s
2025-11-08 14:01:25.466225 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 7.87s
2025-11-08 14:01:25.466230 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.20s
2025-11-08 14:01:25.466234 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.92s
2025-11-08 14:01:25.466238 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 5.51s
2025-11-08 14:01:25.466242 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.89s
2025-11-08 14:01:25.466246 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.29s
2025-11-08 14:01:25.466250 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.60s
2025-11-08 14:01:25.466254 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.78s
2025-11-08 14:01:25.466258 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.56s
2025-11-08 14:01:25.466263 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 2.40s
2025-11-08 14:01:25.466286 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.95s
2025-11-08 14:01:28.516530 | orchestrator | 2025-11-08 14:01:28 | INFO  | Task df643b7b-3d6d-4172-b479-e9e17506723d is in state STARTED
2025-11-08 14:01:28.516708 | orchestrator | 2025-11-08 14:01:28 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED
2025-11-08 14:01:28.518315 | orchestrator | 2025-11-08 14:01:28 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED
2025-11-08 14:01:28.519768 | orchestrator | 2025-11-08 14:01:28 | INFO  | Task 90a56aba-d6ae-4ffd-a8d1-5ea23252086b is in state STARTED
2025-11-08 14:01:28.520115 | orchestrator | 2025-11-08 14:01:28 | INFO  | Wait 1 second(s) until the next check
2025-11-08 14:01:31.562840 | orchestrator | 2025-11-08 14:01:31 | INFO  | Task df643b7b-3d6d-4172-b479-e9e17506723d is in state STARTED
2025-11-08 14:01:31.563657 | orchestrator | 2025-11-08 14:01:31 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED
2025-11-08 14:01:31.565529 | orchestrator | 2025-11-08 14:01:31 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED
2025-11-08 14:01:31.568287 | orchestrator | 2025-11-08 14:01:31 | INFO  | Task 90a56aba-d6ae-4ffd-a8d1-5ea23252086b is in state STARTED
2025-11-08 14:01:31.568315 | orchestrator | 2025-11-08 14:01:31 | INFO  | Wait 1 second(s) until the next check
2025-11-08 14:01:34.601265 | orchestrator | 2025-11-08 14:01:34 | INFO  | Task df643b7b-3d6d-4172-b479-e9e17506723d is in state STARTED
2025-11-08 14:01:34.601500 | orchestrator | 2025-11-08 14:01:34 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED
2025-11-08 14:01:34.602276 | orchestrator | 2025-11-08 14:01:34 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED
2025-11-08 14:01:34.603060 | orchestrator | 2025-11-08 14:01:34 | INFO  | Task 90a56aba-d6ae-4ffd-a8d1-5ea23252086b is in state STARTED
2025-11-08 14:01:34.603107 | orchestrator | 2025-11-08 14:01:34 | INFO  | Wait 1 second(s) until the next check
2025-11-08 14:01:37.653477 | orchestrator | 2025-11-08 14:01:37 | INFO  | Task df643b7b-3d6d-4172-b479-e9e17506723d is in state STARTED
2025-11-08 14:01:37.656582 | orchestrator | 2025-11-08 14:01:37 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED
2025-11-08 14:01:37.659458 | orchestrator | 2025-11-08 14:01:37 | INFO  | Task 
aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:01:37.662239 | orchestrator | 2025-11-08 14:01:37 | INFO  | Task 90a56aba-d6ae-4ffd-a8d1-5ea23252086b is in state STARTED 2025-11-08 14:01:37.662324 | orchestrator | 2025-11-08 14:01:37 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:01:40.697843 | orchestrator | 2025-11-08 14:01:40 | INFO  | Task df643b7b-3d6d-4172-b479-e9e17506723d is in state STARTED 2025-11-08 14:01:40.698884 | orchestrator | 2025-11-08 14:01:40 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:01:40.700542 | orchestrator | 2025-11-08 14:01:40 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:01:40.702217 | orchestrator | 2025-11-08 14:01:40 | INFO  | Task 90a56aba-d6ae-4ffd-a8d1-5ea23252086b is in state STARTED 2025-11-08 14:01:40.702266 | orchestrator | 2025-11-08 14:01:40 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:01:43.740212 | orchestrator | 2025-11-08 14:01:43 | INFO  | Task df643b7b-3d6d-4172-b479-e9e17506723d is in state STARTED 2025-11-08 14:01:43.742063 | orchestrator | 2025-11-08 14:01:43 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:01:43.744158 | orchestrator | 2025-11-08 14:01:43 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:01:43.745369 | orchestrator | 2025-11-08 14:01:43 | INFO  | Task 90a56aba-d6ae-4ffd-a8d1-5ea23252086b is in state STARTED 2025-11-08 14:01:43.745408 | orchestrator | 2025-11-08 14:01:43 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:01:46.781952 | orchestrator | 2025-11-08 14:01:46 | INFO  | Task df643b7b-3d6d-4172-b479-e9e17506723d is in state STARTED 2025-11-08 14:01:46.782930 | orchestrator | 2025-11-08 14:01:46 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:01:46.784256 | orchestrator | 2025-11-08 14:01:46 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:01:46.785489 | orchestrator | 2025-11-08 14:01:46 | INFO  | Task 90a56aba-d6ae-4ffd-a8d1-5ea23252086b is in state STARTED 2025-11-08 14:01:46.785530 | orchestrator | 2025-11-08 14:01:46 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:01:49.824814 | orchestrator | 2025-11-08 14:01:49 | INFO  | Task df643b7b-3d6d-4172-b479-e9e17506723d is in state STARTED 2025-11-08 14:01:49.824901 | orchestrator | 2025-11-08 14:01:49 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:01:49.824908 | orchestrator | 2025-11-08 14:01:49 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:01:49.825457 | orchestrator | 2025-11-08 14:01:49 | INFO  | Task 90a56aba-d6ae-4ffd-a8d1-5ea23252086b is in state STARTED 2025-11-08 14:01:49.825521 | orchestrator | 2025-11-08 14:01:49 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:01:52.878920 | orchestrator | 2025-11-08 14:01:52 | INFO  | Task df643b7b-3d6d-4172-b479-e9e17506723d is in state STARTED 2025-11-08 14:01:52.879182 | orchestrator | 2025-11-08 14:01:52 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:01:52.880406 | orchestrator | 2025-11-08 14:01:52 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:01:52.881654 | orchestrator | 2025-11-08 14:01:52 | INFO  | Task 90a56aba-d6ae-4ffd-a8d1-5ea23252086b is in state STARTED 2025-11-08 14:01:52.881725 | orchestrator | 2025-11-08 14:01:52 | INFO  | Wait 1 
second(s) until the next check 2025-11-08 14:01:55.921349 | orchestrator | 2025-11-08 14:01:55 | INFO  | Task df643b7b-3d6d-4172-b479-e9e17506723d is in state STARTED 2025-11-08 14:01:55.922267 | orchestrator | 2025-11-08 14:01:55 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:01:55.924038 | orchestrator | 2025-11-08 14:01:55 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:01:55.924747 | orchestrator | 2025-11-08 14:01:55 | INFO  | Task 90a56aba-d6ae-4ffd-a8d1-5ea23252086b is in state STARTED 2025-11-08 14:01:55.924796 | orchestrator | 2025-11-08 14:01:55 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:01:58.966245 | orchestrator | 2025-11-08 14:01:58 | INFO  | Task df643b7b-3d6d-4172-b479-e9e17506723d is in state STARTED 2025-11-08 14:01:58.968388 | orchestrator | 2025-11-08 14:01:58 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:01:58.970827 | orchestrator | 2025-11-08 14:01:58 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:01:58.972220 | orchestrator | 2025-11-08 14:01:58 | INFO  | Task 90a56aba-d6ae-4ffd-a8d1-5ea23252086b is in state STARTED 2025-11-08 14:01:58.972662 | orchestrator | 2025-11-08 14:01:58 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:02:02.024420 | orchestrator | 2025-11-08 14:02:02 | INFO  | Task df643b7b-3d6d-4172-b479-e9e17506723d is in state STARTED 2025-11-08 14:02:02.024539 | orchestrator | 2025-11-08 14:02:02 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:02:02.025965 | orchestrator | 2025-11-08 14:02:02 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:02:02.027334 | orchestrator | 2025-11-08 14:02:02 | INFO  | Task 90a56aba-d6ae-4ffd-a8d1-5ea23252086b is in state STARTED 2025-11-08 14:02:02.029098 | orchestrator | 2025-11-08 14:02:02 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:02:05.068263 | orchestrator | 2025-11-08 14:02:05 | INFO  | Task df643b7b-3d6d-4172-b479-e9e17506723d is in state STARTED 2025-11-08 14:02:05.069106 | orchestrator | 2025-11-08 14:02:05 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:02:05.070845 | orchestrator | 2025-11-08 14:02:05 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:02:05.071261 | orchestrator | 2025-11-08 14:02:05 | INFO  | Task 90a56aba-d6ae-4ffd-a8d1-5ea23252086b is in state STARTED 2025-11-08 14:02:05.072151 | orchestrator | 2025-11-08 14:02:05 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:02:08.106656 | orchestrator | 2025-11-08 14:02:08 | INFO  | Task df643b7b-3d6d-4172-b479-e9e17506723d is in state STARTED 2025-11-08 14:02:08.107378 | orchestrator | 2025-11-08 14:02:08 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:02:08.108499 | orchestrator | 2025-11-08 14:02:08 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:02:08.109656 | orchestrator | 2025-11-08 14:02:08 | INFO  | Task 90a56aba-d6ae-4ffd-a8d1-5ea23252086b is in state STARTED 2025-11-08 14:02:08.109895 | orchestrator | 2025-11-08 14:02:08 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:02:11.147816 | orchestrator | 2025-11-08 14:02:11 | INFO  | Task df643b7b-3d6d-4172-b479-e9e17506723d is in state STARTED 2025-11-08 14:02:11.149598 | orchestrator | 2025-11-08 14:02:11 | INFO  | Task 
b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:02:11.151670 | orchestrator | 2025-11-08 14:02:11 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:02:11.152955 | orchestrator | 2025-11-08 14:02:11 | INFO  | Task 90a56aba-d6ae-4ffd-a8d1-5ea23252086b is in state STARTED 2025-11-08 14:02:11.153750 | orchestrator | 2025-11-08 14:02:11 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:02:14.192954 | orchestrator | 2025-11-08 14:02:14 | INFO  | Task df643b7b-3d6d-4172-b479-e9e17506723d is in state STARTED 2025-11-08 14:02:14.193992 | orchestrator | 2025-11-08 14:02:14 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:02:14.195404 | orchestrator | 2025-11-08 14:02:14 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:02:14.196980 | orchestrator | 2025-11-08 14:02:14 | INFO  | Task 90a56aba-d6ae-4ffd-a8d1-5ea23252086b is in state STARTED 2025-11-08 14:02:14.197034 | orchestrator | 2025-11-08 14:02:14 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:02:17.231443 | orchestrator | 2025-11-08 14:02:17 | INFO  | Task df643b7b-3d6d-4172-b479-e9e17506723d is in state STARTED 2025-11-08 14:02:17.232954 | orchestrator | 2025-11-08 14:02:17 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:02:17.238361 | orchestrator | 2025-11-08 14:02:17 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:02:17.240097 | orchestrator | 2025-11-08 14:02:17 | INFO  | Task 90a56aba-d6ae-4ffd-a8d1-5ea23252086b is in state STARTED 2025-11-08 14:02:17.240187 | orchestrator | 2025-11-08 14:02:17 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:02:20.276234 | orchestrator | 2025-11-08 14:02:20 | INFO  | Task df643b7b-3d6d-4172-b479-e9e17506723d is in state STARTED 2025-11-08 14:02:20.276313 | orchestrator | 2025-11-08 14:02:20 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:02:20.277397 | orchestrator | 2025-11-08 14:02:20 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:02:20.278897 | orchestrator | 2025-11-08 14:02:20 | INFO  | Task 90a56aba-d6ae-4ffd-a8d1-5ea23252086b is in state STARTED 2025-11-08 14:02:20.278926 | orchestrator | 2025-11-08 14:02:20 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:02:23.313302 | orchestrator | 2025-11-08 14:02:23 | INFO  | Task df643b7b-3d6d-4172-b479-e9e17506723d is in state STARTED 2025-11-08 14:02:23.313578 | orchestrator | 2025-11-08 14:02:23 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:02:23.314800 | orchestrator | 2025-11-08 14:02:23 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:02:23.316019 | orchestrator | 2025-11-08 14:02:23 | INFO  | Task 90a56aba-d6ae-4ffd-a8d1-5ea23252086b is in state STARTED 2025-11-08 14:02:23.316062 | orchestrator | 2025-11-08 14:02:23 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:02:26.353302 | orchestrator | 2025-11-08 14:02:26 | INFO  | Task df643b7b-3d6d-4172-b479-e9e17506723d is in state STARTED 2025-11-08 14:02:26.353761 | orchestrator | 2025-11-08 14:02:26 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:02:26.354612 | orchestrator | 2025-11-08 14:02:26 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:02:26.355457 | orchestrator | 2025-11-08 14:02:26 | INFO  | Task 
90a56aba-d6ae-4ffd-a8d1-5ea23252086b is in state STARTED
2025-11-08 14:02:26.355559 | orchestrator | 2025-11-08 14:02:26 | INFO  | Wait 1 second(s) until the next check
2025-11-08 14:02:29.383569 | orchestrator | 2025-11-08 14:02:29 | INFO  | Task df643b7b-3d6d-4172-b479-e9e17506723d is in state STARTED
2025-11-08 14:02:29.384035 | orchestrator | 2025-11-08 14:02:29 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED
2025-11-08 14:02:29.384887 | orchestrator | 2025-11-08 14:02:29 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED
2025-11-08 14:02:29.387176 | orchestrator | 2025-11-08 14:02:29 | INFO  | Task 90a56aba-d6ae-4ffd-a8d1-5ea23252086b is in state STARTED
2025-11-08 14:02:29.387205 | orchestrator | 2025-11-08 14:02:29 | INFO  | Wait 1 second(s) until the next check
2025-11-08 14:02:32.425447 | orchestrator | 2025-11-08 14:02:32 | INFO  | Task df643b7b-3d6d-4172-b479-e9e17506723d is in state SUCCESS
2025-11-08 14:02:32.426348 | orchestrator |
2025-11-08 14:02:32.426389 | orchestrator |
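The block of STARTED/Wait messages above is the deploy wrapper polling four background task IDs until the first of them reports SUCCESS. As a rough sketch of that pattern only, not the actual osism client code, a polling loop of this shape could look like the following; `client` and its `get_task_state()` accessor are hypothetical stand-ins:

```python
import time

# Hypothetical stand-ins for whatever client the deploy wrapper really uses.
PENDING_STATES = {"PENDING", "STARTED"}

def wait_for_tasks(client, task_ids, interval=1.0, timeout=3600.0):
    """Poll each task until it leaves the STARTED/PENDING phase or we time out."""
    deadline = time.monotonic() + timeout
    remaining = set(task_ids)
    while remaining:
        for task_id in sorted(remaining):
            state = client.get_task_state(task_id)  # assumed accessor
            print(f"Task {task_id} is in state {state}")
            if state not in PENDING_STATES:
                remaining.discard(task_id)
        if remaining:
            if time.monotonic() > deadline:
                raise TimeoutError(f"tasks still not finished: {sorted(remaining)}")
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```

A one-second sleep plus the time spent querying each task would explain the roughly three-second spacing between the log entries above.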
2025-11-08 14:02:32.426399 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-11-08 14:02:32.426408 | orchestrator |
2025-11-08 14:02:32.426417 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-11-08 14:02:32.426426 | orchestrator | Saturday 08 November 2025 14:01:23 +0000 (0:00:00.244) 0:00:00.244 *****
2025-11-08 14:02:32.426435 | orchestrator | ok: [testbed-node-0]
2025-11-08 14:02:32.426444 | orchestrator | ok: [testbed-node-1]
2025-11-08 14:02:32.426453 | orchestrator | ok: [testbed-node-2]
2025-11-08 14:02:32.426461 | orchestrator |
2025-11-08 14:02:32.426469 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-11-08 14:02:32.426477 | orchestrator | Saturday 08 November 2025 14:01:23 +0000 (0:00:00.274) 0:00:00.519 *****
2025-11-08 14:02:32.426486 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2025-11-08 14:02:32.426518 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2025-11-08 14:02:32.426527 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2025-11-08 14:02:32.426535 | orchestrator |
2025-11-08 14:02:32.426543 | orchestrator | PLAY [Apply role placement] ****************************************************
2025-11-08 14:02:32.426551 | orchestrator |
2025-11-08 14:02:32.426559 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-11-08 14:02:32.426567 | orchestrator | Saturday 08 November 2025 14:01:23 +0000 (0:00:00.392) 0:00:00.911 *****
2025-11-08 14:02:32.426575 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-08 14:02:32.426584 | orchestrator |
2025-11-08 14:02:32.426592 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2025-11-08 14:02:32.426600 | orchestrator | Saturday 08 November 2025 14:01:24 +0000 (0:00:00.502) 0:00:01.414 *****
2025-11-08 14:02:32.426608 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2025-11-08 14:02:32.426616 | orchestrator |
2025-11-08 14:02:32.426624 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2025-11-08 14:02:32.426632 | orchestrator | Saturday 08 November 2025 14:01:28 +0000 (0:00:03.772) 0:00:05.186 *****
2025-11-08 14:02:32.426640 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2025-11-08 14:02:32.426648 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2025-11-08 14:02:32.426695 | orchestrator |
2025-11-08 14:02:32.426704 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2025-11-08 14:02:32.426712 | orchestrator | Saturday 08 November 2025 14:01:34 +0000 (0:00:06.538) 0:00:11.724 *****
2025-11-08 14:02:32.426720 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-11-08 14:02:32.426728 | orchestrator |
2025-11-08 14:02:32.426736 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2025-11-08 14:02:32.426743 | orchestrator | Saturday 08 November 2025 14:01:38 +0000 (0:00:03.365) 0:00:15.090 *****
2025-11-08 14:02:32.426751 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-11-08 14:02:32.426759 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2025-11-08 14:02:32.426767 | orchestrator |
2025-11-08 14:02:32.426775 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2025-11-08 14:02:32.426783 | orchestrator | Saturday 08 November 2025 14:01:41 +0000 (0:00:03.951) 0:00:19.042 *****
2025-11-08 14:02:32.426791 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-11-08 14:02:32.426799 | orchestrator |
2025-11-08 14:02:32.426807 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2025-11-08 14:02:32.426815 | orchestrator | Saturday 08 November 2025 14:01:45 +0000 (0:00:03.563) 0:00:22.605 *****
2025-11-08 14:02:32.426823 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2025-11-08 14:02:32.426830 | orchestrator |
2025-11-08 14:02:32.426838 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-11-08 14:02:32.426846 | orchestrator | Saturday 08 November 2025 14:01:49 +0000 (0:00:03.915) 0:00:26.520 *****
2025-11-08 14:02:32.426855 | orchestrator | skipping: [testbed-node-0]
2025-11-08 14:02:32.426863 | orchestrator | skipping: [testbed-node-1]
2025-11-08 14:02:32.426871 | orchestrator | skipping: [testbed-node-2]
2025-11-08 14:02:32.426878 | orchestrator |
2025-11-08 14:02:32.426900 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2025-11-08 14:02:32.426908 | orchestrator | Saturday 08 November 2025 14:01:49 +0000 (0:00:00.326) 0:00:26.847 *****
2025-11-08 14:02:32.426920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}}}}) 2025-11-08 14:02:32.426951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-11-08 14:02:32.426962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-11-08 14:02:32.426972 | orchestrator | 2025-11-08 14:02:32.426986 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-11-08 14:02:32.426995 | orchestrator | Saturday 08 November 2025 14:01:50 +0000 (0:00:00.829) 0:00:27.677 ***** 2025-11-08 14:02:32.427004 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:02:32.427012 | orchestrator | 2025-11-08 14:02:32.427021 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-11-08 14:02:32.427030 | orchestrator | Saturday 08 November 2025 14:01:50 +0000 (0:00:00.133) 0:00:27.810 ***** 2025-11-08 14:02:32.427039 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:02:32.427048 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:02:32.427057 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:02:32.427065 | orchestrator | 2025-11-08 14:02:32.427074 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-11-08 14:02:32.427083 | orchestrator | Saturday 08 November 2025 14:01:51 +0000 (0:00:00.546) 0:00:28.357 ***** 2025-11-08 14:02:32.427093 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 14:02:32.427101 | orchestrator | 2025-11-08 14:02:32.427110 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-11-08 14:02:32.427119 | orchestrator | Saturday 08 November 2025 14:01:51 +0000 (0:00:00.598) 0:00:28.956 ***** 2025-11-08 14:02:32.427133 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-11-08 14:02:32.427165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-11-08 14:02:32.427179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-11-08 14:02:32.427201 | orchestrator | 2025-11-08 14:02:32.427216 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-11-08 14:02:32.427228 | orchestrator | Saturday 08 November 2025 14:01:53 +0000 (0:00:01.448) 0:00:30.404 ***** 2025-11-08 14:02:32.427241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-11-08 14:02:32.427254 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:02:32.427274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-11-08 14:02:32.427298 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:02:32.427318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-11-08 14:02:32.427333 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:02:32.427346 | orchestrator | 2025-11-08 14:02:32.427358 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-11-08 14:02:32.427371 | orchestrator | Saturday 08 November 2025 14:01:54 +0000 (0:00:00.991) 0:00:31.395 ***** 2025-11-08 14:02:32.427384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-11-08 14:02:32.427398 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:02:32.427411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-11-08 14:02:32.427425 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:02:32.427444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-11-08 14:02:32.427464 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:02:32.427478 | orchestrator | 2025-11-08 14:02:32.427490 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-11-08 14:02:32.427502 | orchestrator | Saturday 08 November 2025 14:01:55 +0000 (0:00:00.713) 0:00:32.109 ***** 2025-11-08 14:02:32.427523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-11-08 14:02:32.427538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-11-08 14:02:32.427552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-11-08 14:02:32.427566 | orchestrator | 2025-11-08 14:02:32.427580 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-11-08 14:02:32.427593 | orchestrator | Saturday 08 November 2025 14:01:56 +0000 (0:00:01.376) 0:00:33.485 ***** 2025-11-08 14:02:32.427615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-11-08 14:02:32.427636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-11-08 14:02:32.427687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-11-08 14:02:32.427701 | orchestrator | 2025-11-08 14:02:32.427714 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-11-08 14:02:32.427727 | orchestrator | Saturday 08 November 2025 14:01:59 +0000 (0:00:02.663) 0:00:36.148 ***** 2025-11-08 14:02:32.427857 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-11-08 14:02:32.427872 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-11-08 14:02:32.427880 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-11-08 14:02:32.427888 | orchestrator | 2025-11-08 14:02:32.427896 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-11-08 14:02:32.427904 | orchestrator | Saturday 08 November 2025 14:02:00 +0000 (0:00:01.617) 0:00:37.766 ***** 2025-11-08 14:02:32.427912 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:02:32.427920 | orchestrator | changed: [testbed-node-1] 2025-11-08 14:02:32.427928 | orchestrator | changed: [testbed-node-2] 2025-11-08 14:02:32.427936 | orchestrator | 2025-11-08 14:02:32.427943 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-11-08 14:02:32.427960 | orchestrator | Saturday 08 November 2025 14:02:02 +0000 (0:00:01.629) 0:00:39.395 ***** 2025-11-08 14:02:32.427968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-11-08 14:02:32.427977 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:02:32.427991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-11-08 14:02:32.427999 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:02:32.428016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-11-08 14:02:32.428025 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:02:32.428033 | orchestrator | 2025-11-08 14:02:32.428040 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-11-08 14:02:32.428048 | orchestrator | Saturday 08 November 2025 14:02:02 +0000 (0:00:00.635) 0:00:40.031 ***** 2025-11-08 14:02:32.428056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-11-08 14:02:32.428071 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-11-08 14:02:32.428089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-11-08 14:02:32.428098 | orchestrator | 2025-11-08 14:02:32.428106 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-11-08 14:02:32.428114 | orchestrator | Saturday 08 November 2025 14:02:04 +0000 (0:00:01.711) 0:00:41.743 ***** 2025-11-08 14:02:32.428122 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:02:32.428129 | orchestrator | 2025-11-08 14:02:32.428137 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-11-08 14:02:32.428145 | orchestrator | Saturday 08 November 2025 14:02:07 +0000 (0:00:03.112) 0:00:44.855 ***** 2025-11-08 14:02:32.428153 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:02:32.428161 | orchestrator | 2025-11-08 14:02:32.428169 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-11-08 14:02:32.428176 | orchestrator | Saturday 08 November 2025 14:02:10 +0000 (0:00:02.476) 0:00:47.332 ***** 2025-11-08 14:02:32.428184 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:02:32.428192 | orchestrator | 2025-11-08 14:02:32.428200 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-11-08 14:02:32.428208 | orchestrator | Saturday 08 November 2025 14:02:24 +0000 (0:00:14.562) 0:01:01.895 ***** 2025-11-08 14:02:32.428215 | orchestrator | 2025-11-08 14:02:32.428223 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-11-08 14:02:32.428231 | orchestrator | Saturday 08 November 2025 14:02:24 +0000 (0:00:00.060) 0:01:01.955 ***** 2025-11-08 14:02:32.428239 | orchestrator | 2025-11-08 14:02:32.428258 | orchestrator | TASK [placement : Flush 
handlers] ********************************************** 2025-11-08 14:02:32.428272 | orchestrator | Saturday 08 November 2025 14:02:24 +0000 (0:00:00.058) 0:01:02.013 ***** 2025-11-08 14:02:32.428286 | orchestrator | 2025-11-08 14:02:32.428299 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-11-08 14:02:32.428313 | orchestrator | Saturday 08 November 2025 14:02:25 +0000 (0:00:00.062) 0:01:02.076 ***** 2025-11-08 14:02:32.428327 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:02:32.428342 | orchestrator | changed: [testbed-node-1] 2025-11-08 14:02:32.428365 | orchestrator | changed: [testbed-node-2] 2025-11-08 14:02:32.428380 | orchestrator | 2025-11-08 14:02:32.428393 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 14:02:32.428408 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-08 14:02:32.428425 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-11-08 14:02:32.428440 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-11-08 14:02:32.428455 | orchestrator | 2025-11-08 14:02:32.428469 | orchestrator | 2025-11-08 14:02:32.428477 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 14:02:32.428488 | orchestrator | Saturday 08 November 2025 14:02:30 +0000 (0:00:05.501) 0:01:07.577 ***** 2025-11-08 14:02:32.428503 | orchestrator | =============================================================================== 2025-11-08 14:02:32.428516 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.56s 2025-11-08 14:02:32.428529 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.54s 2025-11-08 14:02:32.428543 | orchestrator | placement : Restart placement-api container ----------------------------- 5.50s 2025-11-08 14:02:32.428557 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.95s 2025-11-08 14:02:32.428570 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.92s 2025-11-08 14:02:32.428583 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.77s 2025-11-08 14:02:32.428596 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.56s 2025-11-08 14:02:32.428609 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.37s 2025-11-08 14:02:32.428622 | orchestrator | placement : Creating placement databases -------------------------------- 3.11s 2025-11-08 14:02:32.428635 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.66s 2025-11-08 14:02:32.428649 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.48s 2025-11-08 14:02:32.428689 | orchestrator | placement : Check placement containers ---------------------------------- 1.71s 2025-11-08 14:02:32.428703 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.63s 2025-11-08 14:02:32.428717 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.62s 2025-11-08 14:02:32.428730 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.45s 
2025-11-08 14:02:32.428745 | orchestrator | placement : Copying over config.json files for services ----------------- 1.38s 2025-11-08 14:02:32.428759 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.99s 2025-11-08 14:02:32.428773 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.83s 2025-11-08 14:02:32.428787 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.71s 2025-11-08 14:02:32.428801 | orchestrator | placement : Copying over existing policy file --------------------------- 0.64s 2025-11-08 14:02:32.428815 | orchestrator | 2025-11-08 14:02:32 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:02:32.428837 | orchestrator | 2025-11-08 14:02:32 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:02:32.429144 | orchestrator | 2025-11-08 14:02:32 | INFO  | Task 90a56aba-d6ae-4ffd-a8d1-5ea23252086b is in state STARTED 2025-11-08 14:02:32.430503 | orchestrator | 2025-11-08 14:02:32 | INFO  | Task 6921b7c9-0a84-4b97-b989-9a72cfa8e6d7 is in state STARTED 2025-11-08 14:02:32.430639 | orchestrator | 2025-11-08 14:02:32 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:02:35.475737 | orchestrator | 2025-11-08 14:02:35 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:02:35.477712 | orchestrator | 2025-11-08 14:02:35 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:02:35.478769 | orchestrator | 2025-11-08 14:02:35 | INFO  | Task 90a56aba-d6ae-4ffd-a8d1-5ea23252086b is in state STARTED 2025-11-08 14:02:35.480123 | orchestrator | 2025-11-08 14:02:35 | INFO  | Task 6921b7c9-0a84-4b97-b989-9a72cfa8e6d7 is in state STARTED 2025-11-08 14:02:35.480154 | orchestrator | 2025-11-08 14:02:35 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:02:38.524233 | orchestrator | 2025-11-08 14:02:38 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:02:38.525165 | orchestrator | 2025-11-08 14:02:38 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:02:38.526263 | orchestrator | 2025-11-08 14:02:38 | INFO  | Task 90a56aba-d6ae-4ffd-a8d1-5ea23252086b is in state STARTED 2025-11-08 14:02:38.527876 | orchestrator | 2025-11-08 14:02:38 | INFO  | Task 6921b7c9-0a84-4b97-b989-9a72cfa8e6d7 is in state STARTED 2025-11-08 14:02:38.527913 | orchestrator | 2025-11-08 14:02:38 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:02:41.571136 | orchestrator | 2025-11-08 14:02:41 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:02:41.572547 | orchestrator | 2025-11-08 14:02:41 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:02:41.573803 | orchestrator | 2025-11-08 14:02:41 | INFO  | Task 90a56aba-d6ae-4ffd-a8d1-5ea23252086b is in state STARTED 2025-11-08 14:02:41.574812 | orchestrator | 2025-11-08 14:02:41 | INFO  | Task 6921b7c9-0a84-4b97-b989-9a72cfa8e6d7 is in state STARTED 2025-11-08 14:02:41.574841 | orchestrator | 2025-11-08 14:02:41 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:02:44.622090 | orchestrator | 2025-11-08 14:02:44 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:02:44.623727 | orchestrator | 2025-11-08 14:02:44 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:02:44.625543 | 
orchestrator | 2025-11-08 14:02:44 | INFO  | Task 90a56aba-d6ae-4ffd-a8d1-5ea23252086b is in state STARTED 2025-11-08 14:02:44.628218 | orchestrator | 2025-11-08 14:02:44 | INFO  | Task 6921b7c9-0a84-4b97-b989-9a72cfa8e6d7 is in state STARTED 2025-11-08 14:02:44.628259 | orchestrator | 2025-11-08 14:02:44 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:02:47.684562 | orchestrator | 2025-11-08 14:02:47 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:02:47.686117 | orchestrator | 2025-11-08 14:02:47 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:02:47.687851 | orchestrator | 2025-11-08 14:02:47 | INFO  | Task 90a56aba-d6ae-4ffd-a8d1-5ea23252086b is in state STARTED 2025-11-08 14:02:47.689371 | orchestrator | 2025-11-08 14:02:47 | INFO  | Task 6921b7c9-0a84-4b97-b989-9a72cfa8e6d7 is in state STARTED 2025-11-08 14:02:47.689399 | orchestrator | 2025-11-08 14:02:47 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:02:50.721794 | orchestrator | 2025-11-08 14:02:50 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:02:50.721899 | orchestrator | 2025-11-08 14:02:50 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:02:50.722874 | orchestrator | 2025-11-08 14:02:50 | INFO  | Task 90a56aba-d6ae-4ffd-a8d1-5ea23252086b is in state STARTED 2025-11-08 14:02:50.723798 | orchestrator | 2025-11-08 14:02:50 | INFO  | Task 6921b7c9-0a84-4b97-b989-9a72cfa8e6d7 is in state STARTED 2025-11-08 14:02:50.723890 | orchestrator | 2025-11-08 14:02:50 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:02:53.774210 | orchestrator | 2025-11-08 14:02:53 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:02:53.776906 | orchestrator | 2025-11-08 14:02:53 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:02:53.779582 | orchestrator | 2025-11-08 14:02:53 | INFO  | Task 90a56aba-d6ae-4ffd-a8d1-5ea23252086b is in state STARTED 2025-11-08 14:02:53.781742 | orchestrator | 2025-11-08 14:02:53 | INFO  | Task 6921b7c9-0a84-4b97-b989-9a72cfa8e6d7 is in state STARTED 2025-11-08 14:02:53.781782 | orchestrator | 2025-11-08 14:02:53 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:02:56.827727 | orchestrator | 2025-11-08 14:02:56 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:02:56.829188 | orchestrator | 2025-11-08 14:02:56 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:02:56.832656 | orchestrator | 2025-11-08 14:02:56 | INFO  | Task 90a56aba-d6ae-4ffd-a8d1-5ea23252086b is in state STARTED 2025-11-08 14:02:56.834269 | orchestrator | 2025-11-08 14:02:56 | INFO  | Task 6921b7c9-0a84-4b97-b989-9a72cfa8e6d7 is in state STARTED 2025-11-08 14:02:56.834350 | orchestrator | 2025-11-08 14:02:56 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:02:59.892119 | orchestrator | 2025-11-08 14:02:59 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:02:59.893540 | orchestrator | 2025-11-08 14:02:59 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:02:59.895131 | orchestrator | 2025-11-08 14:02:59 | INFO  | Task 90a56aba-d6ae-4ffd-a8d1-5ea23252086b is in state STARTED 2025-11-08 14:02:59.897118 | orchestrator | 2025-11-08 14:02:59 | INFO  | Task 6921b7c9-0a84-4b97-b989-9a72cfa8e6d7 is in state STARTED 2025-11-08 14:02:59.897169 | 
orchestrator | 2025-11-08 14:02:59 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:03:02.941787 | orchestrator | 2025-11-08 14:03:02 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:03:02.943844 | orchestrator | 2025-11-08 14:03:02 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:03:02.947132 | orchestrator | 2025-11-08 14:03:02 | INFO  | Task 90a56aba-d6ae-4ffd-a8d1-5ea23252086b is in state STARTED 2025-11-08 14:03:02.948727 | orchestrator | 2025-11-08 14:03:02 | INFO  | Task 6921b7c9-0a84-4b97-b989-9a72cfa8e6d7 is in state STARTED 2025-11-08 14:03:02.950194 | orchestrator | 2025-11-08 14:03:02 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:03:06.000788 | orchestrator | 2025-11-08 14:03:06 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:03:06.006469 | orchestrator | 2025-11-08 14:03:06 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state STARTED 2025-11-08 14:03:06.011447 | orchestrator | 2025-11-08 14:03:06 | INFO  | Task 90a56aba-d6ae-4ffd-a8d1-5ea23252086b is in state STARTED 2025-11-08 14:03:06.015426 | orchestrator | 2025-11-08 14:03:06 | INFO  | Task 6921b7c9-0a84-4b97-b989-9a72cfa8e6d7 is in state STARTED 2025-11-08 14:03:06.015514 | orchestrator | 2025-11-08 14:03:06 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:03:09.056043 | orchestrator | 2025-11-08 14:03:09 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:03:09.057860 | orchestrator | 2025-11-08 14:03:09 | INFO  | Task aa9eac1d-d631-4ee0-a94e-f29fd9052eec is in state SUCCESS 2025-11-08 14:03:09.060521 | orchestrator | 2025-11-08 14:03:09.060582 | orchestrator | 2025-11-08 14:03:09.060594 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-08 14:03:09.060604 | orchestrator | 2025-11-08 14:03:09.060668 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-08 14:03:09.060678 | orchestrator | Saturday 08 November 2025 13:58:08 +0000 (0:00:00.299) 0:00:00.299 ***** 2025-11-08 14:03:09.060687 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:03:09.060695 | orchestrator | ok: [testbed-node-1] 2025-11-08 14:03:09.060703 | orchestrator | ok: [testbed-node-2] 2025-11-08 14:03:09.060711 | orchestrator | ok: [testbed-node-3] 2025-11-08 14:03:09.060836 | orchestrator | ok: [testbed-node-4] 2025-11-08 14:03:09.060846 | orchestrator | ok: [testbed-node-5] 2025-11-08 14:03:09.060909 | orchestrator | 2025-11-08 14:03:09.060918 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-08 14:03:09.060926 | orchestrator | Saturday 08 November 2025 13:58:09 +0000 (0:00:00.632) 0:00:00.932 ***** 2025-11-08 14:03:09.060934 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-11-08 14:03:09.060943 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-11-08 14:03:09.060951 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-11-08 14:03:09.060960 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-11-08 14:03:09.060969 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-11-08 14:03:09.060978 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-11-08 14:03:09.060987 | orchestrator | 2025-11-08 14:03:09.060995 | orchestrator | PLAY [Apply role neutron] 
****************************************************** 2025-11-08 14:03:09.061002 | orchestrator | 2025-11-08 14:03:09.061010 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-11-08 14:03:09.061035 | orchestrator | Saturday 08 November 2025 13:58:09 +0000 (0:00:00.852) 0:00:01.785 ***** 2025-11-08 14:03:09.061045 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 14:03:09.061056 | orchestrator | 2025-11-08 14:03:09.061066 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-11-08 14:03:09.061075 | orchestrator | Saturday 08 November 2025 13:58:11 +0000 (0:00:01.345) 0:00:03.130 ***** 2025-11-08 14:03:09.061084 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:03:09.061093 | orchestrator | ok: [testbed-node-2] 2025-11-08 14:03:09.061103 | orchestrator | ok: [testbed-node-1] 2025-11-08 14:03:09.061112 | orchestrator | ok: [testbed-node-3] 2025-11-08 14:03:09.061160 | orchestrator | ok: [testbed-node-4] 2025-11-08 14:03:09.061177 | orchestrator | ok: [testbed-node-5] 2025-11-08 14:03:09.061187 | orchestrator | 2025-11-08 14:03:09.061198 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-11-08 14:03:09.061208 | orchestrator | Saturday 08 November 2025 13:58:13 +0000 (0:00:01.732) 0:00:04.863 ***** 2025-11-08 14:03:09.061218 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:03:09.061229 | orchestrator | ok: [testbed-node-1] 2025-11-08 14:03:09.061238 | orchestrator | ok: [testbed-node-3] 2025-11-08 14:03:09.061248 | orchestrator | ok: [testbed-node-2] 2025-11-08 14:03:09.061271 | orchestrator | ok: [testbed-node-4] 2025-11-08 14:03:09.061281 | orchestrator | ok: [testbed-node-5] 2025-11-08 14:03:09.061290 | orchestrator | 2025-11-08 14:03:09.061300 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-11-08 14:03:09.061310 | orchestrator | Saturday 08 November 2025 13:58:14 +0000 (0:00:01.093) 0:00:05.957 ***** 2025-11-08 14:03:09.061320 | orchestrator | ok: [testbed-node-0] => { 2025-11-08 14:03:09.061331 | orchestrator |  "changed": false, 2025-11-08 14:03:09.061341 | orchestrator |  "msg": "All assertions passed" 2025-11-08 14:03:09.061351 | orchestrator | } 2025-11-08 14:03:09.061361 | orchestrator | ok: [testbed-node-1] => { 2025-11-08 14:03:09.061371 | orchestrator |  "changed": false, 2025-11-08 14:03:09.061380 | orchestrator |  "msg": "All assertions passed" 2025-11-08 14:03:09.061405 | orchestrator | } 2025-11-08 14:03:09.061415 | orchestrator | ok: [testbed-node-2] => { 2025-11-08 14:03:09.061424 | orchestrator |  "changed": false, 2025-11-08 14:03:09.061436 | orchestrator |  "msg": "All assertions passed" 2025-11-08 14:03:09.061446 | orchestrator | } 2025-11-08 14:03:09.061456 | orchestrator | ok: [testbed-node-3] => { 2025-11-08 14:03:09.061464 | orchestrator |  "changed": false, 2025-11-08 14:03:09.061473 | orchestrator |  "msg": "All assertions passed" 2025-11-08 14:03:09.061482 | orchestrator | } 2025-11-08 14:03:09.061491 | orchestrator | ok: [testbed-node-4] => { 2025-11-08 14:03:09.061500 | orchestrator |  "changed": false, 2025-11-08 14:03:09.061509 | orchestrator |  "msg": "All assertions passed" 2025-11-08 14:03:09.061518 | orchestrator | } 2025-11-08 14:03:09.061527 | orchestrator | ok: [testbed-node-5] => { 2025-11-08 14:03:09.061537 | 
orchestrator |  "changed": false, 2025-11-08 14:03:09.061547 | orchestrator |  "msg": "All assertions passed" 2025-11-08 14:03:09.061555 | orchestrator | } 2025-11-08 14:03:09.061563 | orchestrator | 2025-11-08 14:03:09.061572 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-11-08 14:03:09.061581 | orchestrator | Saturday 08 November 2025 13:58:14 +0000 (0:00:00.757) 0:00:06.714 ***** 2025-11-08 14:03:09.061590 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:03:09.061599 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:03:09.061630 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:03:09.061640 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:03:09.061649 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:03:09.061657 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:03:09.061665 | orchestrator | 2025-11-08 14:03:09.061674 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-11-08 14:03:09.061682 | orchestrator | Saturday 08 November 2025 13:58:15 +0000 (0:00:00.596) 0:00:07.311 ***** 2025-11-08 14:03:09.061691 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-11-08 14:03:09.061699 | orchestrator | 2025-11-08 14:03:09.061708 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-11-08 14:03:09.061716 | orchestrator | Saturday 08 November 2025 13:58:18 +0000 (0:00:03.381) 0:00:10.692 ***** 2025-11-08 14:03:09.061725 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-11-08 14:03:09.061735 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-11-08 14:03:09.061743 | orchestrator | 2025-11-08 14:03:09.061765 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-11-08 14:03:09.061774 | orchestrator | Saturday 08 November 2025 13:58:25 +0000 (0:00:06.963) 0:00:17.655 ***** 2025-11-08 14:03:09.061783 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-11-08 14:03:09.061792 | orchestrator | 2025-11-08 14:03:09.061800 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-11-08 14:03:09.061809 | orchestrator | Saturday 08 November 2025 13:58:29 +0000 (0:00:03.396) 0:00:21.051 ***** 2025-11-08 14:03:09.061817 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-11-08 14:03:09.061826 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-11-08 14:03:09.061835 | orchestrator | 2025-11-08 14:03:09.061843 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-11-08 14:03:09.061851 | orchestrator | Saturday 08 November 2025 13:58:32 +0000 (0:00:03.572) 0:00:24.624 ***** 2025-11-08 14:03:09.061860 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-11-08 14:03:09.061868 | orchestrator | 2025-11-08 14:03:09.061877 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-11-08 14:03:09.061903 | orchestrator | Saturday 08 November 2025 13:58:36 +0000 (0:00:03.483) 0:00:28.107 ***** 2025-11-08 14:03:09.061913 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-11-08 14:03:09.061921 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-11-08 
14:03:09.061944 | orchestrator | 2025-11-08 14:03:09.061952 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-11-08 14:03:09.061960 | orchestrator | Saturday 08 November 2025 13:58:43 +0000 (0:00:07.571) 0:00:35.679 ***** 2025-11-08 14:03:09.061968 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:03:09.061983 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:03:09.061992 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:03:09.062001 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:03:09.062009 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:03:09.062060 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:03:09.062071 | orchestrator | 2025-11-08 14:03:09.062080 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-11-08 14:03:09.062090 | orchestrator | Saturday 08 November 2025 13:58:44 +0000 (0:00:00.797) 0:00:36.477 ***** 2025-11-08 14:03:09.062099 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:03:09.062108 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:03:09.062117 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:03:09.062125 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:03:09.062134 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:03:09.062142 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:03:09.062151 | orchestrator | 2025-11-08 14:03:09.062160 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-11-08 14:03:09.062169 | orchestrator | Saturday 08 November 2025 13:58:47 +0000 (0:00:02.668) 0:00:39.145 ***** 2025-11-08 14:03:09.062178 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:03:09.062187 | orchestrator | ok: [testbed-node-2] 2025-11-08 14:03:09.062196 | orchestrator | ok: [testbed-node-1] 2025-11-08 14:03:09.062206 | orchestrator | ok: [testbed-node-3] 2025-11-08 14:03:09.062214 | orchestrator | ok: [testbed-node-4] 2025-11-08 14:03:09.062223 | orchestrator | ok: [testbed-node-5] 2025-11-08 14:03:09.062232 | orchestrator | 2025-11-08 14:03:09.062242 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-11-08 14:03:09.062251 | orchestrator | Saturday 08 November 2025 13:58:48 +0000 (0:00:01.046) 0:00:40.192 ***** 2025-11-08 14:03:09.062260 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:03:09.062268 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:03:09.062277 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:03:09.062286 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:03:09.062295 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:03:09.062304 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:03:09.062312 | orchestrator | 2025-11-08 14:03:09.062321 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-11-08 14:03:09.062330 | orchestrator | Saturday 08 November 2025 13:58:52 +0000 (0:00:03.873) 0:00:44.065 ***** 2025-11-08 14:03:09.062343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-08 14:03:09.062366 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-08 14:03:09.062384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-08 14:03:09.062399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-08 14:03:09.062409 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-08 14:03:09.062418 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-08 14:03:09.062428 | orchestrator | 2025-11-08 14:03:09.062437 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-11-08 14:03:09.062446 | orchestrator | Saturday 08 November 2025 13:58:55 +0000 (0:00:03.219) 0:00:47.284 ***** 2025-11-08 14:03:09.062463 | orchestrator | [WARNING]: Skipped 2025-11-08 14:03:09.062472 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-11-08 14:03:09.062481 | orchestrator | due to this access issue: 2025-11-08 14:03:09.062490 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-11-08 14:03:09.062499 | orchestrator | a directory 2025-11-08 14:03:09.062507 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-08 14:03:09.062516 | orchestrator | 2025-11-08 14:03:09.062525 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-11-08 14:03:09.062540 | orchestrator | Saturday 08 November 2025 13:58:57 +0000 (0:00:01.736) 0:00:49.021 ***** 2025-11-08 14:03:09.062550 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 14:03:09.062561 | orchestrator | 2025-11-08 14:03:09.062569 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-11-08 14:03:09.062578 | orchestrator | Saturday 08 November 2025 13:58:59 +0000 (0:00:01.794) 0:00:50.816 ***** 2025-11-08 14:03:09.062587 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-08 14:03:09.062601 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-08 14:03:09.062634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-08 14:03:09.062643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-08 14:03:09.062666 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-08 14:03:09.062675 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-08 14:03:09.062683 | orchestrator | 2025-11-08 14:03:09.062695 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-11-08 14:03:09.062704 | orchestrator | Saturday 08 November 2025 13:59:03 +0000 (0:00:04.254) 0:00:55.071 ***** 2025-11-08 14:03:09.062713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-08 14:03:09.062722 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:03:09.062731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-08 14:03:09.062746 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:03:09.062756 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-08 14:03:09.062764 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:03:09.062788 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-08 14:03:09.062798 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:03:09.062810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-08 14:03:09.062819 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:03:09.062828 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-08 14:03:09.062836 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:03:09.062845 | orchestrator | 2025-11-08 14:03:09.062853 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-11-08 14:03:09.062862 | orchestrator | Saturday 08 November 2025 13:59:06 +0000 (0:00:03.378) 0:00:58.449 ***** 2025-11-08 14:03:09.062870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-08 14:03:09.062885 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:03:09.062901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-08 14:03:09.062910 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:03:09.062919 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-08 14:03:09.062927 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:03:09.062938 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-08 14:03:09.062946 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:03:09.062954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-08 14:03:09.062968 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:03:09.062977 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-08 14:03:09.062986 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:03:09.062994 | orchestrator | 2025-11-08 14:03:09.063002 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-11-08 14:03:09.063010 | orchestrator | Saturday 08 November 2025 13:59:09 +0000 (0:00:03.278) 0:01:01.728 ***** 2025-11-08 14:03:09.063019 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:03:09.063027 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:03:09.063035 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:03:09.063043 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:03:09.063052 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:03:09.063060 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:03:09.063068 | orchestrator | 2025-11-08 14:03:09.063076 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-11-08 14:03:09.063089 | orchestrator | Saturday 08 November 2025 13:59:12 +0000 (0:00:03.035) 0:01:04.763 ***** 2025-11-08 14:03:09.063098 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:03:09.063106 | orchestrator | 2025-11-08 14:03:09.063114 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-11-08 14:03:09.063122 | orchestrator | Saturday 08 November 2025 13:59:13 +0000 (0:00:00.153) 0:01:04.916 ***** 2025-11-08 14:03:09.063131 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:03:09.063139 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:03:09.063147 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:03:09.063155 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:03:09.063164 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:03:09.063172 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:03:09.063180 | orchestrator | 2025-11-08 
14:03:09.063189 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-11-08 14:03:09.063197 | orchestrator | Saturday 08 November 2025 13:59:14 +0000 (0:00:01.230) 0:01:06.146 ***** 2025-11-08 14:03:09.063210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-08 14:03:09.063224 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:03:09.063233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-08 14:03:09.063241 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:03:09.063249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-08 14:03:09.063258 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:03:09.063271 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-08 14:03:09.063300 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:03:09.063309 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-08 14:03:09.063318 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:03:09.063333 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-08 14:03:09.063347 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:03:09.063355 | orchestrator | 2025-11-08 14:03:09.063364 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-11-08 14:03:09.063372 | orchestrator | Saturday 08 November 2025 13:59:17 +0000 (0:00:03.202) 0:01:09.348 ***** 2025-11-08 14:03:09.063381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-08 14:03:09.063399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 
'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-08 14:03:09.063414 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-08 14:03:09.063423 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-08 14:03:09.063473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-08 14:03:09.063482 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': 
True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-08 14:03:09.063491 | orchestrator | 2025-11-08 14:03:09.063499 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-11-08 14:03:09.063508 | orchestrator | Saturday 08 November 2025 13:59:21 +0000 (0:00:04.367) 0:01:13.716 ***** 2025-11-08 14:03:09.063516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-08 14:03:09.063531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-08 14:03:09.063543 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-08 14:03:09.063558 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 
'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-08 14:03:09.063567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-08 14:03:09.063575 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-08 14:03:09.063583 | orchestrator | 2025-11-08 14:03:09.063592 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-11-08 14:03:09.063600 | orchestrator | Saturday 08 November 2025 13:59:28 +0000 (0:00:06.851) 0:01:20.567 ***** 2025-11-08 14:03:09.063659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}})  2025-11-08 14:03:09.063677 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:03:09.063690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-08 14:03:09.063699 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:03:09.063707 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-08 14:03:09.063717 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:03:09.063725 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-08 14:03:09.063734 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:03:09.063742 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 
'timeout': '30'}}})  2025-11-08 14:03:09.063752 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:03:09.063766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-08 14:03:09.063779 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:03:09.063787 | orchestrator | 2025-11-08 14:03:09.063795 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-11-08 14:03:09.063803 | orchestrator | Saturday 08 November 2025 13:59:32 +0000 (0:00:03.398) 0:01:23.966 ***** 2025-11-08 14:03:09.063811 | orchestrator | changed: [testbed-node-1] 2025-11-08 14:03:09.063819 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:03:09.063827 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:03:09.063836 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:03:09.063844 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:03:09.063853 | orchestrator | changed: [testbed-node-2] 2025-11-08 14:03:09.063862 | orchestrator | 2025-11-08 14:03:09.063873 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-11-08 14:03:09.063882 | orchestrator | Saturday 08 November 2025 13:59:34 +0000 (0:00:02.837) 0:01:26.803 ***** 2025-11-08 14:03:09.063891 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-08 14:03:09.063900 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:03:09.063909 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-08 14:03:09.063918 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:03:09.063927 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-08 14:03:09.063934 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:03:09.063948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-08 14:03:09.063966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-08 14:03:09.063976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-08 14:03:09.063985 | orchestrator | 2025-11-08 14:03:09.063994 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-11-08 14:03:09.064002 | orchestrator | Saturday 08 November 2025 13:59:39 +0000 (0:00:04.087) 0:01:30.890 ***** 2025-11-08 14:03:09.064010 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:03:09.064019 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:03:09.064027 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:03:09.064036 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:03:09.064044 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:03:09.064053 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:03:09.064061 | orchestrator | 2025-11-08 14:03:09.064070 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-11-08 14:03:09.064078 | orchestrator | Saturday 08 November 2025 13:59:41 +0000 (0:00:02.034) 0:01:32.925 ***** 2025-11-08 14:03:09.064086 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:03:09.064095 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:03:09.064103 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:03:09.064111 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:03:09.064119 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:03:09.064128 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:03:09.064136 | orchestrator | 2025-11-08 14:03:09.064144 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-11-08 14:03:09.064158 | orchestrator | Saturday 08 November 2025 13:59:43 +0000 (0:00:02.217) 0:01:35.145 ***** 2025-11-08 14:03:09.064166 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:03:09.064174 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:03:09.064183 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:03:09.064191 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:03:09.064199 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:03:09.064207 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:03:09.064216 | orchestrator | 2025-11-08 14:03:09.064224 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-11-08 14:03:09.064232 | orchestrator | Saturday 08 November 2025 13:59:47 +0000 (0:00:03.901) 0:01:39.046 ***** 2025-11-08 14:03:09.064241 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:03:09.064249 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:03:09.064257 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:03:09.064264 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:03:09.064272 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:03:09.064280 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:03:09.064287 | orchestrator | 2025-11-08 14:03:09.064295 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-11-08 14:03:09.064304 | orchestrator | Saturday 08 November 2025 13:59:49 +0000 (0:00:01.850) 0:01:40.897 ***** 2025-11-08 14:03:09.064313 | orchestrator | skipping: [testbed-node-1] 2025-11-08 
14:03:09.064322 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:03:09.064330 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:03:09.064338 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:03:09.064351 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:03:09.064360 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:03:09.064369 | orchestrator | 2025-11-08 14:03:09.064377 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-11-08 14:03:09.064385 | orchestrator | Saturday 08 November 2025 13:59:50 +0000 (0:00:01.864) 0:01:42.762 ***** 2025-11-08 14:03:09.064393 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:03:09.064401 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:03:09.064408 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:03:09.064417 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:03:09.064425 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:03:09.064433 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:03:09.064442 | orchestrator | 2025-11-08 14:03:09.064450 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-11-08 14:03:09.064458 | orchestrator | Saturday 08 November 2025 13:59:53 +0000 (0:00:02.266) 0:01:45.029 ***** 2025-11-08 14:03:09.064467 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-11-08 14:03:09.064476 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:03:09.064484 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-11-08 14:03:09.064493 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:03:09.064501 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-11-08 14:03:09.064509 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:03:09.064518 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-11-08 14:03:09.064526 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:03:09.064539 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-11-08 14:03:09.064547 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:03:09.064555 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-11-08 14:03:09.064563 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:03:09.064571 | orchestrator | 2025-11-08 14:03:09.064579 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-11-08 14:03:09.064588 | orchestrator | Saturday 08 November 2025 13:59:56 +0000 (0:00:02.969) 0:01:47.998 ***** 2025-11-08 14:03:09.064602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-08 14:03:09.064627 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:03:09.064634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-08 14:03:09.064642 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:03:09.064657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-08 14:03:09.064665 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:03:09.064674 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-08 14:03:09.064682 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:03:09.064697 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-08 14:03:09.064711 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:03:09.064719 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-08 14:03:09.064728 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:03:09.064736 | orchestrator | 2025-11-08 14:03:09.064744 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-11-08 14:03:09.064753 | orchestrator | Saturday 08 November 2025 13:59:58 +0000 (0:00:02.198) 0:01:50.196 ***** 2025-11-08 14:03:09.064761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-08 14:03:09.064770 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:03:09.064784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-08 14:03:09.064793 | orchestrator | skipping: [testbed-node-1] 
2025-11-08 14:03:09.064804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-08 14:03:09.064819 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:03:09.064827 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-08 14:03:09.064835 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:03:09.064844 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-08 14:03:09.064853 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:03:09.064861 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-08 14:03:09.064869 | orchestrator | skipping: [testbed-node-4] 
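The '(item=...)' payloads echoed by these loop tasks are the per-container service definitions the neutron role iterates over (neutron-server on the control nodes, neutron-ovn-metadata-agent on the compute nodes), and the same dict is reprinted for every task in the block. As a reading aid only, and not part of the job output, the sketch below reconstructs the testbed-node-0 'neutron-server' entry from the values shown in the items above (abridged: 'dimensions', 'listen_port' and the empty volume slots are dropped) and reduces it to the fields usually checked when scanning this log; the variable and helper names are hypothetical.

    # Python sketch; field values copied from the loop item logged for testbed-node-0 above.
    neutron_server = {
        "container_name": "neutron_server",
        "image": "registry.osism.tech/kolla/neutron-server:2024.2",
        "enabled": True,
        "group": "neutron-server",
        "volumes": [
            "/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "kolla_logs:/var/log/kolla/",
        ],
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9696"],
            "timeout": "30",
        },
        "haproxy": {
            "neutron_server": {"enabled": True, "mode": "http",
                               "external": False, "port": "9696"},
            "neutron_server_external": {"enabled": True, "mode": "http", "external": True,
                                        "external_fqdn": "api.testbed.osism.xyz", "port": "9696"},
        },
    }

    def summarize_service(svc):
        # Reduce a service definition to what matters when reading these tasks:
        # which image runs, how the container is health-checked, and which
        # frontends haproxy exposes for it.
        return {
            "image": svc["image"],
            "healthcheck": " ".join(svc["healthcheck"]["test"][1:]),
            "haproxy_ports": [v["port"] for v in svc.get("haproxy", {}).values()
                              if v.get("enabled")],
        }

    print(summarize_service(neutron_server))
    # {'image': 'registry.osism.tech/kolla/neutron-server:2024.2',
    #  'healthcheck': 'healthcheck_curl http://192.168.16.10:9696',
    #  'haproxy_ports': ['9696', '9696']}

The per-node difference visible in the log is only the healthcheck URL (192.168.16.10/.11/.12 for nodes 0-2), while the OVN metadata agent entries swap the curl check for 'healthcheck_port neutron-ovn-metadata-agent 6640' and carry no haproxy section.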
2025-11-08 14:03:09.064877 | orchestrator | 2025-11-08 14:03:09.064885 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-11-08 14:03:09.064894 | orchestrator | Saturday 08 November 2025 14:00:00 +0000 (0:00:02.405) 0:01:52.602 ***** 2025-11-08 14:03:09.064902 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:03:09.064916 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:03:09.064924 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:03:09.064932 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:03:09.064940 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:03:09.064947 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:03:09.064955 | orchestrator | 2025-11-08 14:03:09.064963 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-11-08 14:03:09.064971 | orchestrator | Saturday 08 November 2025 14:00:06 +0000 (0:00:05.765) 0:01:58.367 ***** 2025-11-08 14:03:09.064979 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:03:09.064986 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:03:09.065000 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:03:09.065009 | orchestrator | changed: [testbed-node-4] 2025-11-08 14:03:09.065017 | orchestrator | changed: [testbed-node-3] 2025-11-08 14:03:09.065026 | orchestrator | changed: [testbed-node-5] 2025-11-08 14:03:09.065034 | orchestrator | 2025-11-08 14:03:09.065043 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-11-08 14:03:09.065051 | orchestrator | Saturday 08 November 2025 14:00:12 +0000 (0:00:06.244) 0:02:04.612 ***** 2025-11-08 14:03:09.065059 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:03:09.065067 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:03:09.065076 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:03:09.065084 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:03:09.065092 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:03:09.065101 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:03:09.065109 | orchestrator | 2025-11-08 14:03:09.065118 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-11-08 14:03:09.065126 | orchestrator | Saturday 08 November 2025 14:00:15 +0000 (0:00:02.976) 0:02:07.589 ***** 2025-11-08 14:03:09.065134 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:03:09.065142 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:03:09.065154 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:03:09.065162 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:03:09.065170 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:03:09.065179 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:03:09.065187 | orchestrator | 2025-11-08 14:03:09.065195 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-11-08 14:03:09.065204 | orchestrator | Saturday 08 November 2025 14:00:19 +0000 (0:00:04.125) 0:02:11.714 ***** 2025-11-08 14:03:09.065212 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:03:09.065220 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:03:09.065228 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:03:09.065237 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:03:09.065245 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:03:09.065253 | orchestrator | skipping: 
[testbed-node-4] 2025-11-08 14:03:09.065262 | orchestrator | 2025-11-08 14:03:09.065270 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-11-08 14:03:09.065278 | orchestrator | Saturday 08 November 2025 14:00:21 +0000 (0:00:01.815) 0:02:13.529 ***** 2025-11-08 14:03:09.065286 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:03:09.065295 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:03:09.065303 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:03:09.065311 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:03:09.065319 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:03:09.065327 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:03:09.065335 | orchestrator | 2025-11-08 14:03:09.065343 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-11-08 14:03:09.065351 | orchestrator | Saturday 08 November 2025 14:00:23 +0000 (0:00:01.759) 0:02:15.289 ***** 2025-11-08 14:03:09.065360 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:03:09.065368 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:03:09.065377 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:03:09.065385 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:03:09.065393 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:03:09.065401 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:03:09.065409 | orchestrator | 2025-11-08 14:03:09.065417 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-11-08 14:03:09.065425 | orchestrator | Saturday 08 November 2025 14:00:25 +0000 (0:00:02.280) 0:02:17.569 ***** 2025-11-08 14:03:09.065433 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:03:09.065442 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:03:09.065451 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:03:09.065459 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:03:09.065468 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:03:09.065481 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:03:09.065489 | orchestrator | 2025-11-08 14:03:09.065497 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-11-08 14:03:09.065505 | orchestrator | Saturday 08 November 2025 14:00:28 +0000 (0:00:02.858) 0:02:20.428 ***** 2025-11-08 14:03:09.065514 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:03:09.065522 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:03:09.065530 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:03:09.065539 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:03:09.065547 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:03:09.065555 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:03:09.065564 | orchestrator | 2025-11-08 14:03:09.065572 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-11-08 14:03:09.065581 | orchestrator | Saturday 08 November 2025 14:00:30 +0000 (0:00:02.032) 0:02:22.460 ***** 2025-11-08 14:03:09.065589 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-11-08 14:03:09.065598 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:03:09.065607 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-11-08 14:03:09.065657 | orchestrator | 
skipping: [testbed-node-1] 2025-11-08 14:03:09.065665 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-11-08 14:03:09.065673 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:03:09.065682 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-11-08 14:03:09.065690 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:03:09.065704 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-11-08 14:03:09.065713 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:03:09.065721 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-11-08 14:03:09.065730 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:03:09.065738 | orchestrator | 2025-11-08 14:03:09.065746 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-11-08 14:03:09.065754 | orchestrator | Saturday 08 November 2025 14:00:32 +0000 (0:00:01.956) 0:02:24.416 ***** 2025-11-08 14:03:09.065769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-08 14:03:09.065778 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:03:09.065786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-08 14:03:09.065801 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:03:09.065810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-08 14:03:09.065818 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:03:09.065827 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-08 14:03:09.065842 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-08 14:03:09.065850 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:03:09.065858 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:03:09.065870 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-08 14:03:09.065879 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:03:09.065888 | orchestrator | 2025-11-08 14:03:09.065896 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-11-08 14:03:09.065905 | orchestrator | Saturday 08 November 2025 14:00:34 +0000 (0:00:01.832) 0:02:26.248 ***** 2025-11-08 14:03:09.065918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-08 14:03:09.065928 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-08 14:03:09.065941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-08 14:03:09.065949 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-08 14:03:09.065961 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 
'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-08 14:03:09.065976 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-08 14:03:09.065985 | orchestrator | 2025-11-08 14:03:09.065994 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-11-08 14:03:09.066002 | orchestrator | Saturday 08 November 2025 14:00:39 +0000 (0:00:04.922) 0:02:31.170 ***** 2025-11-08 14:03:09.066011 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:03:09.066179 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:03:09.066191 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:03:09.066199 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:03:09.066207 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:03:09.066216 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:03:09.066224 | orchestrator | 2025-11-08 14:03:09.066233 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-11-08 14:03:09.066241 | orchestrator | Saturday 08 November 2025 14:00:40 +0000 (0:00:01.319) 0:02:32.490 ***** 2025-11-08 14:03:09.066249 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:03:09.066257 | orchestrator | 2025-11-08 14:03:09.066265 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-11-08 14:03:09.066273 | orchestrator | Saturday 08 November 2025 14:00:42 +0000 (0:00:02.180) 0:02:34.671 ***** 2025-11-08 14:03:09.066282 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:03:09.066290 | orchestrator | 2025-11-08 14:03:09.066298 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-11-08 14:03:09.066306 | orchestrator | Saturday 08 November 2025 14:00:45 +0000 (0:00:02.302) 0:02:36.973 ***** 2025-11-08 14:03:09.066314 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:03:09.066322 | orchestrator | 2025-11-08 14:03:09.066330 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-11-08 14:03:09.066339 | orchestrator | Saturday 08 November 2025 14:01:28 +0000 (0:00:43.753) 0:03:20.727 ***** 2025-11-08 14:03:09.066347 | orchestrator | 
2025-11-08 14:03:09.066355 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-11-08 14:03:09.066363 | orchestrator | Saturday 08 November 2025 14:01:28 +0000 (0:00:00.071) 0:03:20.799 ***** 2025-11-08 14:03:09.066371 | orchestrator | 2025-11-08 14:03:09.066379 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-11-08 14:03:09.066387 | orchestrator | Saturday 08 November 2025 14:01:29 +0000 (0:00:00.283) 0:03:21.082 ***** 2025-11-08 14:03:09.066395 | orchestrator | 2025-11-08 14:03:09.066403 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-11-08 14:03:09.066411 | orchestrator | Saturday 08 November 2025 14:01:29 +0000 (0:00:00.067) 0:03:21.150 ***** 2025-11-08 14:03:09.066419 | orchestrator | 2025-11-08 14:03:09.066433 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-11-08 14:03:09.066441 | orchestrator | Saturday 08 November 2025 14:01:29 +0000 (0:00:00.064) 0:03:21.215 ***** 2025-11-08 14:03:09.066449 | orchestrator | 2025-11-08 14:03:09.066457 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-11-08 14:03:09.066465 | orchestrator | Saturday 08 November 2025 14:01:29 +0000 (0:00:00.067) 0:03:21.283 ***** 2025-11-08 14:03:09.066481 | orchestrator | 2025-11-08 14:03:09.066489 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-11-08 14:03:09.066498 | orchestrator | Saturday 08 November 2025 14:01:29 +0000 (0:00:00.070) 0:03:21.353 ***** 2025-11-08 14:03:09.066507 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:03:09.066515 | orchestrator | changed: [testbed-node-2] 2025-11-08 14:03:09.066523 | orchestrator | changed: [testbed-node-1] 2025-11-08 14:03:09.066531 | orchestrator | 2025-11-08 14:03:09.066538 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-11-08 14:03:09.066545 | orchestrator | Saturday 08 November 2025 14:02:00 +0000 (0:00:31.194) 0:03:52.548 ***** 2025-11-08 14:03:09.066553 | orchestrator | changed: [testbed-node-4] 2025-11-08 14:03:09.066561 | orchestrator | changed: [testbed-node-3] 2025-11-08 14:03:09.066569 | orchestrator | changed: [testbed-node-5] 2025-11-08 14:03:09.066578 | orchestrator | 2025-11-08 14:03:09.066586 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 14:03:09.066595 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-11-08 14:03:09.066629 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-11-08 14:03:09.066640 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-11-08 14:03:09.066648 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-11-08 14:03:09.066656 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-11-08 14:03:09.066664 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-11-08 14:03:09.066672 | orchestrator | 2025-11-08 14:03:09.066680 | orchestrator | 2025-11-08 14:03:09.066689 | orchestrator | TASKS RECAP 
******************************************************************** 2025-11-08 14:03:09.066698 | orchestrator | Saturday 08 November 2025 14:03:05 +0000 (0:01:05.187) 0:04:57.735 ***** 2025-11-08 14:03:09.066707 | orchestrator | =============================================================================== 2025-11-08 14:03:09.066715 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 65.19s 2025-11-08 14:03:09.066724 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 43.75s 2025-11-08 14:03:09.066733 | orchestrator | neutron : Restart neutron-server container ----------------------------- 31.19s 2025-11-08 14:03:09.066741 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.57s 2025-11-08 14:03:09.066749 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.96s 2025-11-08 14:03:09.066758 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.85s 2025-11-08 14:03:09.066767 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 6.24s 2025-11-08 14:03:09.066775 | orchestrator | neutron : Copying over metadata_agent.ini ------------------------------- 5.77s 2025-11-08 14:03:09.066784 | orchestrator | neutron : Check neutron containers -------------------------------------- 4.92s 2025-11-08 14:03:09.066793 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.37s 2025-11-08 14:03:09.066801 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.25s 2025-11-08 14:03:09.066811 | orchestrator | neutron : Copying over ironic_neutron_agent.ini ------------------------- 4.13s 2025-11-08 14:03:09.066820 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.09s 2025-11-08 14:03:09.066830 | orchestrator | neutron : Copying over sriov_agent.ini ---------------------------------- 3.90s 2025-11-08 14:03:09.066850 | orchestrator | Setting sysctl values --------------------------------------------------- 3.87s 2025-11-08 14:03:09.066859 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.57s 2025-11-08 14:03:09.066869 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.48s 2025-11-08 14:03:09.066879 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 3.40s 2025-11-08 14:03:09.066888 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.40s 2025-11-08 14:03:09.066898 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.38s 2025-11-08 14:03:09.066908 | orchestrator | 2025-11-08 14:03:09 | INFO  | Task 90a56aba-d6ae-4ffd-a8d1-5ea23252086b is in state STARTED 2025-11-08 14:03:09.066917 | orchestrator | 2025-11-08 14:03:09 | INFO  | Task 77b28507-4155-427a-98a3-1a0538d44a40 is in state STARTED 2025-11-08 14:03:09.066927 | orchestrator | 2025-11-08 14:03:09 | INFO  | Task 6921b7c9-0a84-4b97-b989-9a72cfa8e6d7 is in state SUCCESS 2025-11-08 14:03:09.066941 | orchestrator | 2025-11-08 14:03:09 | INFO  | Task 588e1e60-272a-48d8-a2a1-8edd5e102ba7 is in state STARTED 2025-11-08 14:03:09.066949 | orchestrator | 2025-11-08 14:03:09 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:03:12.101555 | orchestrator | 2025-11-08 14:03:12 | INFO  | Task 
b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:03:12.102475 | orchestrator | 2025-11-08 14:03:12 | INFO  | Task 90a56aba-d6ae-4ffd-a8d1-5ea23252086b is in state STARTED 2025-11-08 14:03:12.104025 | orchestrator | 2025-11-08 14:03:12 | INFO  | Task 77b28507-4155-427a-98a3-1a0538d44a40 is in state STARTED 2025-11-08 14:03:12.105165 | orchestrator | 2025-11-08 14:03:12 | INFO  | Task 588e1e60-272a-48d8-a2a1-8edd5e102ba7 is in state STARTED 2025-11-08 14:03:12.105222 | orchestrator | 2025-11-08 14:03:12 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:03:15.147332 | orchestrator | 2025-11-08 14:03:15 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:03:15.148558 | orchestrator | 2025-11-08 14:03:15 | INFO  | Task 90a56aba-d6ae-4ffd-a8d1-5ea23252086b is in state STARTED 2025-11-08 14:03:15.149825 | orchestrator | 2025-11-08 14:03:15 | INFO  | Task 77b28507-4155-427a-98a3-1a0538d44a40 is in state STARTED 2025-11-08 14:03:15.150825 | orchestrator | 2025-11-08 14:03:15 | INFO  | Task 588e1e60-272a-48d8-a2a1-8edd5e102ba7 is in state STARTED 2025-11-08 14:03:15.150879 | orchestrator | 2025-11-08 14:03:15 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:03:18.193060 | orchestrator | 2025-11-08 14:03:18 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:03:18.196083 | orchestrator | 2025-11-08 14:03:18 | INFO  | Task 90a56aba-d6ae-4ffd-a8d1-5ea23252086b is in state STARTED 2025-11-08 14:03:18.198655 | orchestrator | 2025-11-08 14:03:18 | INFO  | Task 77b28507-4155-427a-98a3-1a0538d44a40 is in state STARTED 2025-11-08 14:03:18.201184 | orchestrator | 2025-11-08 14:03:18 | INFO  | Task 588e1e60-272a-48d8-a2a1-8edd5e102ba7 is in state STARTED 2025-11-08 14:03:18.201943 | orchestrator | 2025-11-08 14:03:18 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:03:21.239145 | orchestrator | 2025-11-08 14:03:21 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:03:21.241750 | orchestrator | 2025-11-08 14:03:21 | INFO  | Task 90a56aba-d6ae-4ffd-a8d1-5ea23252086b is in state STARTED 2025-11-08 14:03:21.244405 | orchestrator | 2025-11-08 14:03:21 | INFO  | Task 77b28507-4155-427a-98a3-1a0538d44a40 is in state STARTED 2025-11-08 14:03:21.246484 | orchestrator | 2025-11-08 14:03:21 | INFO  | Task 588e1e60-272a-48d8-a2a1-8edd5e102ba7 is in state STARTED 2025-11-08 14:03:21.246744 | orchestrator | 2025-11-08 14:03:21 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:03:24.295554 | orchestrator | 2025-11-08 14:03:24 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:03:24.298349 | orchestrator | 2025-11-08 14:03:24 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:03:24.302080 | orchestrator | 2025-11-08 14:03:24 | INFO  | Task 90a56aba-d6ae-4ffd-a8d1-5ea23252086b is in state SUCCESS 2025-11-08 14:03:24.304900 | orchestrator | 2025-11-08 14:03:24.304979 | orchestrator | 2025-11-08 14:03:24.304995 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-08 14:03:24.305008 | orchestrator | 2025-11-08 14:03:24.305019 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-08 14:03:24.305030 | orchestrator | Saturday 08 November 2025 14:02:35 +0000 (0:00:00.289) 0:00:00.289 ***** 2025-11-08 14:03:24.305041 | orchestrator | ok: [testbed-manager] 2025-11-08 14:03:24.305053 | 
orchestrator | ok: [testbed-node-3] 2025-11-08 14:03:24.305064 | orchestrator | ok: [testbed-node-4] 2025-11-08 14:03:24.305075 | orchestrator | ok: [testbed-node-5] 2025-11-08 14:03:24.305086 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:03:24.305097 | orchestrator | ok: [testbed-node-1] 2025-11-08 14:03:24.305107 | orchestrator | ok: [testbed-node-2] 2025-11-08 14:03:24.305118 | orchestrator | 2025-11-08 14:03:24.305129 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-08 14:03:24.305140 | orchestrator | Saturday 08 November 2025 14:02:36 +0000 (0:00:00.982) 0:00:01.271 ***** 2025-11-08 14:03:24.305151 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-11-08 14:03:24.305163 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-11-08 14:03:24.305174 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-11-08 14:03:24.305184 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-11-08 14:03:24.305195 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-11-08 14:03:24.305205 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-11-08 14:03:24.305216 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-11-08 14:03:24.305227 | orchestrator | 2025-11-08 14:03:24.305238 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-11-08 14:03:24.305249 | orchestrator | 2025-11-08 14:03:24.305260 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-11-08 14:03:24.305271 | orchestrator | Saturday 08 November 2025 14:02:37 +0000 (0:00:00.879) 0:00:02.151 ***** 2025-11-08 14:03:24.305283 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 14:03:24.305419 | orchestrator | 2025-11-08 14:03:24.305433 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-11-08 14:03:24.305444 | orchestrator | Saturday 08 November 2025 14:02:38 +0000 (0:00:01.669) 0:00:03.820 ***** 2025-11-08 14:03:24.305455 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2025-11-08 14:03:24.305466 | orchestrator | 2025-11-08 14:03:24.305477 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-11-08 14:03:24.305488 | orchestrator | Saturday 08 November 2025 14:02:42 +0000 (0:00:03.581) 0:00:07.401 ***** 2025-11-08 14:03:24.305500 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-11-08 14:03:24.305513 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-11-08 14:03:24.305524 | orchestrator | 2025-11-08 14:03:24.305535 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-11-08 14:03:24.305546 | orchestrator | Saturday 08 November 2025 14:02:48 +0000 (0:00:06.428) 0:00:13.829 ***** 2025-11-08 14:03:24.305585 | orchestrator | ok: [testbed-manager] => (item=service) 2025-11-08 14:03:24.305634 | orchestrator | 2025-11-08 14:03:24.305660 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-11-08 
14:03:24.305672 | orchestrator | Saturday 08 November 2025 14:02:52 +0000 (0:00:03.253) 0:00:17.083 ***** 2025-11-08 14:03:24.305683 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-11-08 14:03:24.305694 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2025-11-08 14:03:24.305704 | orchestrator | 2025-11-08 14:03:24.305715 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-11-08 14:03:24.305726 | orchestrator | Saturday 08 November 2025 14:02:55 +0000 (0:00:03.802) 0:00:20.885 ***** 2025-11-08 14:03:24.305736 | orchestrator | ok: [testbed-manager] => (item=admin) 2025-11-08 14:03:24.305747 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2025-11-08 14:03:24.305758 | orchestrator | 2025-11-08 14:03:24.305769 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-11-08 14:03:24.305779 | orchestrator | Saturday 08 November 2025 14:03:02 +0000 (0:00:06.250) 0:00:27.136 ***** 2025-11-08 14:03:24.305790 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2025-11-08 14:03:24.305801 | orchestrator | 2025-11-08 14:03:24.305812 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 14:03:24.305823 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 14:03:24.305835 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 14:03:24.305846 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 14:03:24.305857 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 14:03:24.305867 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 14:03:24.305895 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 14:03:24.305907 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 14:03:24.305918 | orchestrator | 2025-11-08 14:03:24.305928 | orchestrator | 2025-11-08 14:03:24.305939 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 14:03:24.305950 | orchestrator | Saturday 08 November 2025 14:03:07 +0000 (0:00:04.939) 0:00:32.075 ***** 2025-11-08 14:03:24.305961 | orchestrator | =============================================================================== 2025-11-08 14:03:24.305972 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.43s 2025-11-08 14:03:24.305982 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.25s 2025-11-08 14:03:24.305999 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.94s 2025-11-08 14:03:24.306082 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.80s 2025-11-08 14:03:24.306102 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.58s 2025-11-08 14:03:24.306115 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.25s 2025-11-08 14:03:24.306127 | orchestrator | ceph-rgw : include_tasks 
------------------------------------------------ 1.67s 2025-11-08 14:03:24.306139 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.98s 2025-11-08 14:03:24.306156 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.88s 2025-11-08 14:03:24.306174 | orchestrator | 2025-11-08 14:03:24.306192 | orchestrator | 2025-11-08 14:03:24.306226 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-08 14:03:24.306248 | orchestrator | 2025-11-08 14:03:24.306267 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-08 14:03:24.306287 | orchestrator | Saturday 08 November 2025 14:01:27 +0000 (0:00:00.262) 0:00:00.262 ***** 2025-11-08 14:03:24.306307 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:03:24.306327 | orchestrator | ok: [testbed-node-1] 2025-11-08 14:03:24.306347 | orchestrator | ok: [testbed-node-2] 2025-11-08 14:03:24.306365 | orchestrator | 2025-11-08 14:03:24.306382 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-08 14:03:24.306396 | orchestrator | Saturday 08 November 2025 14:01:28 +0000 (0:00:00.324) 0:00:00.586 ***** 2025-11-08 14:03:24.306408 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-11-08 14:03:24.306422 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-11-08 14:03:24.306434 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-11-08 14:03:24.306445 | orchestrator | 2025-11-08 14:03:24.306456 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-11-08 14:03:24.306466 | orchestrator | 2025-11-08 14:03:24.306477 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-11-08 14:03:24.306488 | orchestrator | Saturday 08 November 2025 14:01:28 +0000 (0:00:00.462) 0:00:01.049 ***** 2025-11-08 14:03:24.306498 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 14:03:24.306509 | orchestrator | 2025-11-08 14:03:24.306520 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-11-08 14:03:24.306530 | orchestrator | Saturday 08 November 2025 14:01:29 +0000 (0:00:00.571) 0:00:01.620 ***** 2025-11-08 14:03:24.306541 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-11-08 14:03:24.306551 | orchestrator | 2025-11-08 14:03:24.306569 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-11-08 14:03:24.306580 | orchestrator | Saturday 08 November 2025 14:01:32 +0000 (0:00:03.359) 0:00:04.980 ***** 2025-11-08 14:03:24.306661 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-11-08 14:03:24.306675 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-11-08 14:03:24.306686 | orchestrator | 2025-11-08 14:03:24.306697 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-11-08 14:03:24.306708 | orchestrator | Saturday 08 November 2025 14:01:38 +0000 (0:00:06.324) 0:00:11.304 ***** 2025-11-08 14:03:24.306718 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-11-08 14:03:24.306729 | orchestrator | 2025-11-08 
14:03:24.306740 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-11-08 14:03:24.306751 | orchestrator | Saturday 08 November 2025 14:01:42 +0000 (0:00:03.275) 0:00:14.580 ***** 2025-11-08 14:03:24.306762 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-11-08 14:03:24.306772 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-11-08 14:03:24.306783 | orchestrator | 2025-11-08 14:03:24.306794 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-11-08 14:03:24.306804 | orchestrator | Saturday 08 November 2025 14:01:46 +0000 (0:00:03.938) 0:00:18.518 ***** 2025-11-08 14:03:24.306815 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-11-08 14:03:24.306826 | orchestrator | 2025-11-08 14:03:24.306836 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-11-08 14:03:24.306847 | orchestrator | Saturday 08 November 2025 14:01:49 +0000 (0:00:03.747) 0:00:22.266 ***** 2025-11-08 14:03:24.306858 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-11-08 14:03:24.306869 | orchestrator | 2025-11-08 14:03:24.306879 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-11-08 14:03:24.306890 | orchestrator | Saturday 08 November 2025 14:01:53 +0000 (0:00:03.861) 0:00:26.128 ***** 2025-11-08 14:03:24.306910 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:03:24.306920 | orchestrator | 2025-11-08 14:03:24.306931 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-11-08 14:03:24.306953 | orchestrator | Saturday 08 November 2025 14:01:57 +0000 (0:00:03.533) 0:00:29.661 ***** 2025-11-08 14:03:24.306965 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:03:24.306976 | orchestrator | 2025-11-08 14:03:24.306986 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-11-08 14:03:24.306997 | orchestrator | Saturday 08 November 2025 14:02:01 +0000 (0:00:04.107) 0:00:33.769 ***** 2025-11-08 14:03:24.307008 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:03:24.307018 | orchestrator | 2025-11-08 14:03:24.307029 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-11-08 14:03:24.307040 | orchestrator | Saturday 08 November 2025 14:02:04 +0000 (0:00:03.538) 0:00:37.308 ***** 2025-11-08 14:03:24.307055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-08 14:03:24.307071 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-08 14:03:24.307089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-08 14:03:24.307103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-08 14:03:24.307132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-08 14:03:24.307144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-08 14:03:24.307155 | orchestrator | 2025-11-08 14:03:24.307166 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-11-08 14:03:24.307177 | orchestrator | Saturday 08 November 2025 14:02:06 +0000 (0:00:01.907) 0:00:39.215 ***** 2025-11-08 14:03:24.307188 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:03:24.307200 | orchestrator | 2025-11-08 14:03:24.307210 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-11-08 14:03:24.307221 | orchestrator | Saturday 08 November 2025 14:02:06 +0000 (0:00:00.140) 0:00:39.356 ***** 2025-11-08 14:03:24.307232 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:03:24.307242 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:03:24.307253 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:03:24.307264 | orchestrator | 2025-11-08 14:03:24.307275 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-11-08 14:03:24.307285 | orchestrator | Saturday 08 November 2025 14:02:07 +0000 (0:00:00.582) 0:00:39.938 ***** 2025-11-08 14:03:24.307296 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-08 14:03:24.307307 | orchestrator | 2025-11-08 14:03:24.307317 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-11-08 14:03:24.307328 | orchestrator | Saturday 08 November 2025 14:02:08 +0000 (0:00:01.120) 0:00:41.059 ***** 2025-11-08 14:03:24.307491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-08 14:03:24.307537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-08 14:03:24.307572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-08 14:03:24.307660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-08 14:03:24.307685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-08 14:03:24.307712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-08 14:03:24.307744 | orchestrator | 2025-11-08 14:03:24.307763 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-11-08 14:03:24.307783 | orchestrator | Saturday 08 November 2025 14:02:11 +0000 (0:00:02.485) 0:00:43.544 ***** 2025-11-08 14:03:24.307801 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:03:24.307819 | orchestrator | ok: [testbed-node-1] 2025-11-08 14:03:24.307964 | orchestrator | ok: [testbed-node-2] 2025-11-08 14:03:24.307985 | orchestrator | 2025-11-08 14:03:24.308000 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-11-08 14:03:24.308016 | orchestrator | Saturday 08 November 2025 14:02:11 +0000 (0:00:00.305) 0:00:43.850 ***** 2025-11-08 14:03:24.308031 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 14:03:24.308046 | orchestrator | 2025-11-08 14:03:24.308061 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-11-08 14:03:24.308076 | orchestrator | Saturday 08 November 2025 14:02:12 +0000 (0:00:00.664) 0:00:44.514 ***** 2025-11-08 14:03:24.308109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-08 14:03:24.308128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-08 14:03:24.308144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-08 14:03:24.308181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-08 14:03:24.308197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-08 14:03:24.308226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-08 14:03:24.308242 | orchestrator | 2025-11-08 14:03:24.308258 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-11-08 14:03:24.308273 | orchestrator | Saturday 08 November 2025 14:02:14 +0000 (0:00:02.259) 0:00:46.773 ***** 2025-11-08 14:03:24.308289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-08 14:03:24.308306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-08 14:03:24.308340 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:03:24.308365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-08 14:03:24.308382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-08 14:03:24.308398 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:03:24.308426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-08 14:03:24.308444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-08 14:03:24.308459 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:03:24.308475 | orchestrator | 2025-11-08 14:03:24.308491 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-11-08 14:03:24.308506 | orchestrator | Saturday 08 November 2025 14:02:15 +0000 (0:00:00.667) 0:00:47.441 ***** 2025-11-08 14:03:24.308528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-08 14:03:24.308552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-08 14:03:24.308569 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:03:24.308625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 
'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-08 14:03:24.308644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-08 14:03:24.308660 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:03:24.308676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-08 14:03:24.308716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-08 14:03:24.308734 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:03:24.308750 | orchestrator | 2025-11-08 14:03:24.308767 | orchestrator | TASK [magnum : Copying over config.json files for services] 
******************** 2025-11-08 14:03:24.308785 | orchestrator | Saturday 08 November 2025 14:02:16 +0000 (0:00:01.031) 0:00:48.472 ***** 2025-11-08 14:03:24.308802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-08 14:03:24.308832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-08 14:03:24.308852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-08 14:03:24.308881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-08 14:03:24.308907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-08 14:03:24.308927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-08 14:03:24.308945 | orchestrator | 2025-11-08 14:03:24.308962 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-11-08 14:03:24.308980 | orchestrator | Saturday 08 November 2025 14:02:18 +0000 (0:00:02.425) 0:00:50.898 ***** 2025-11-08 14:03:24.309009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-08 14:03:24.309027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': 
'30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-08 14:03:24.309055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-08 14:03:24.309082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-08 14:03:24.309099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-08 14:03:24.309123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-08 14:03:24.309138 | 
orchestrator | 2025-11-08 14:03:24.309152 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-11-08 14:03:24.309167 | orchestrator | Saturday 08 November 2025 14:02:23 +0000 (0:00:04.849) 0:00:55.748 ***** 2025-11-08 14:03:24.309182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-08 14:03:24.309208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-08 14:03:24.309224 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:03:24.309246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-08 14:03:24.309263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-08 14:03:24.309279 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:03:24.309307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-08 14:03:24.309339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-08 14:03:24.309356 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:03:24.309374 | orchestrator | 2025-11-08 14:03:24.309391 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-11-08 14:03:24.309408 | orchestrator | Saturday 08 November 2025 14:02:23 +0000 (0:00:00.582) 0:00:56.331 ***** 2025-11-08 14:03:24.309434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-08 14:03:24.309452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-08 14:03:24.309481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-08 14:03:24.309500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-08 14:03:24.309523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-08 14:03:24.309533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-08 14:03:24.309543 | orchestrator | 2025-11-08 14:03:24.309562 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-11-08 14:03:24.309573 | orchestrator | Saturday 08 November 2025 14:02:26 +0000 (0:00:02.091) 0:00:58.422 ***** 2025-11-08 14:03:24.309583 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:03:24.309629 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:03:24.309646 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:03:24.309660 | orchestrator | 2025-11-08 14:03:24.309676 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-11-08 14:03:24.309693 | orchestrator | Saturday 08 November 2025 14:02:26 +0000 (0:00:00.278) 0:00:58.701 ***** 2025-11-08 14:03:24.309710 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:03:24.309726 | orchestrator | 2025-11-08 14:03:24.309743 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-11-08 14:03:24.309754 | orchestrator | Saturday 08 November 2025 14:02:28 +0000 (0:00:02.205) 0:01:00.906 ***** 2025-11-08 14:03:24.309763 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:03:24.309773 | orchestrator | 2025-11-08 14:03:24.309782 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-11-08 14:03:24.309792 | orchestrator | Saturday 08 November 2025 14:02:30 +0000 (0:00:02.355) 0:01:03.262 ***** 2025-11-08 14:03:24.309802 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:03:24.309811 | orchestrator | 2025-11-08 14:03:24.309821 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-11-08 14:03:24.309830 | orchestrator | Saturday 08 November 2025 14:02:48 +0000 (0:00:17.406) 0:01:20.669 ***** 2025-11-08 14:03:24.309840 | orchestrator | 2025-11-08 14:03:24.309849 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-11-08 14:03:24.309860 | orchestrator | Saturday 08 November 2025 14:02:48 +0000 (0:00:00.061) 0:01:20.730 ***** 2025-11-08 14:03:24.309876 | orchestrator | 2025-11-08 14:03:24.309892 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-11-08 14:03:24.309920 | orchestrator | Saturday 08 November 2025 14:02:48 +0000 (0:00:00.066) 0:01:20.797 ***** 2025-11-08 14:03:24.309936 | orchestrator | 2025-11-08 14:03:24.309953 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-11-08 14:03:24.309969 | orchestrator | Saturday 08 November 2025 14:02:48 +0000 (0:00:00.067) 0:01:20.864 ***** 2025-11-08 14:03:24.309988 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:03:24.310005 | orchestrator | changed: [testbed-node-1] 2025-11-08 14:03:24.310088 | orchestrator | changed: [testbed-node-2] 2025-11-08 14:03:24.310111 | orchestrator | 2025-11-08 14:03:24.310129 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-11-08 14:03:24.310156 | orchestrator | Saturday 08 November 2025 14:03:07 +0000 (0:00:18.872) 0:01:39.737 ***** 2025-11-08 14:03:24.310167 | 
orchestrator | changed: [testbed-node-0] 2025-11-08 14:03:24.310177 | orchestrator | changed: [testbed-node-2] 2025-11-08 14:03:24.310186 | orchestrator | changed: [testbed-node-1] 2025-11-08 14:03:24.310196 | orchestrator | 2025-11-08 14:03:24.310206 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 14:03:24.310216 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-08 14:03:24.310227 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-11-08 14:03:24.310237 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-11-08 14:03:24.310246 | orchestrator | 2025-11-08 14:03:24.310256 | orchestrator | 2025-11-08 14:03:24.310266 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 14:03:24.310275 | orchestrator | Saturday 08 November 2025 14:03:22 +0000 (0:00:15.266) 0:01:55.004 ***** 2025-11-08 14:03:24.310285 | orchestrator | =============================================================================== 2025-11-08 14:03:24.310295 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 18.87s 2025-11-08 14:03:24.310304 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 17.41s 2025-11-08 14:03:24.310314 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 15.27s 2025-11-08 14:03:24.310323 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.32s 2025-11-08 14:03:24.310333 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 4.85s 2025-11-08 14:03:24.310343 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.11s 2025-11-08 14:03:24.310352 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.94s 2025-11-08 14:03:24.310362 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.86s 2025-11-08 14:03:24.310372 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.75s 2025-11-08 14:03:24.310381 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.54s 2025-11-08 14:03:24.310391 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.53s 2025-11-08 14:03:24.310400 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.36s 2025-11-08 14:03:24.310410 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.28s 2025-11-08 14:03:24.310420 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.49s 2025-11-08 14:03:24.310429 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.43s 2025-11-08 14:03:24.310438 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.36s 2025-11-08 14:03:24.310448 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.26s 2025-11-08 14:03:24.310464 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.21s 2025-11-08 14:03:24.310484 | orchestrator | magnum : Check magnum containers ---------------------------------------- 
2.09s 2025-11-08 14:03:24.310494 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.91s 2025-11-08 14:03:24.310504 | orchestrator | 2025-11-08 14:03:24 | INFO  | Task 77b28507-4155-427a-98a3-1a0538d44a40 is in state STARTED 2025-11-08 14:03:24.310514 | orchestrator | 2025-11-08 14:03:24 | INFO  | Task 588e1e60-272a-48d8-a2a1-8edd5e102ba7 is in state STARTED 2025-11-08 14:03:24.310524 | orchestrator | 2025-11-08 14:03:24 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:03:27.355725 | orchestrator | 2025-11-08 14:03:27 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:03:27.357061 | orchestrator | 2025-11-08 14:03:27 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:03:27.359300 | orchestrator | 2025-11-08 14:03:27 | INFO  | Task 77b28507-4155-427a-98a3-1a0538d44a40 is in state STARTED 2025-11-08 14:03:27.361620 | orchestrator | 2025-11-08 14:03:27 | INFO  | Task 588e1e60-272a-48d8-a2a1-8edd5e102ba7 is in state STARTED 2025-11-08 14:03:27.361650 | orchestrator | 2025-11-08 14:03:27 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:03:30.405918 | orchestrator | 2025-11-08 14:03:30 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:03:30.408177 | orchestrator | 2025-11-08 14:03:30 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:03:30.410950 | orchestrator | 2025-11-08 14:03:30 | INFO  | Task 77b28507-4155-427a-98a3-1a0538d44a40 is in state STARTED 2025-11-08 14:03:30.413803 | orchestrator | 2025-11-08 14:03:30 | INFO  | Task 588e1e60-272a-48d8-a2a1-8edd5e102ba7 is in state STARTED 2025-11-08 14:03:30.414208 | orchestrator | 2025-11-08 14:03:30 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:03:33.463353 | orchestrator | 2025-11-08 14:03:33 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:03:33.465129 | orchestrator | 2025-11-08 14:03:33 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:03:33.469530 | orchestrator | 2025-11-08 14:03:33 | INFO  | Task 77b28507-4155-427a-98a3-1a0538d44a40 is in state STARTED 2025-11-08 14:03:33.471294 | orchestrator | 2025-11-08 14:03:33 | INFO  | Task 588e1e60-272a-48d8-a2a1-8edd5e102ba7 is in state STARTED 2025-11-08 14:03:33.471342 | orchestrator | 2025-11-08 14:03:33 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:03:36.519644 | orchestrator | 2025-11-08 14:03:36 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:03:36.520672 | orchestrator | 2025-11-08 14:03:36 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:03:36.522124 | orchestrator | 2025-11-08 14:03:36 | INFO  | Task 77b28507-4155-427a-98a3-1a0538d44a40 is in state STARTED 2025-11-08 14:03:36.524712 | orchestrator | 2025-11-08 14:03:36 | INFO  | Task 588e1e60-272a-48d8-a2a1-8edd5e102ba7 is in state STARTED 2025-11-08 14:03:36.524733 | orchestrator | 2025-11-08 14:03:36 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:03:39.571864 | orchestrator | 2025-11-08 14:03:39 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:03:39.572299 | orchestrator | 2025-11-08 14:03:39 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:03:39.574897 | orchestrator | 2025-11-08 14:03:39 | INFO  | Task 77b28507-4155-427a-98a3-1a0538d44a40 is in state STARTED 
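The item dictionaries echoed by each Magnum task above are the per-service container definitions that the kolla-ansible role loops over: container name, image, bind mounts, a healthcheck, and, for the API service, the HAProxy frontends. As a readability aid, here is a minimal Python sketch of one such entry, with values copied from the testbed-node-0 magnum-api item logged above; the variable name magnum_services is illustrative only, and the empty placeholder volume entries are omitted.

# Sketch only: mirrors the structure of the item dicts in the log above.
# "magnum_services" is an assumed name, not taken from the log.
magnum_services = {
    "magnum-api": {
        "container_name": "magnum_api",
        "group": "magnum-api",
        "enabled": True,
        "image": "registry.osism.tech/kolla/magnum-api:2024.2",
        "environment": {"DUMMY_ENVIRONMENT": "kolla_useless_env"},
        "volumes": [
            "/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "kolla_logs:/var/log/kolla/",
        ],
        # Container healthcheck: curl the API endpoint on this node,
        # checked every 30 seconds with 3 retries.
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9511"],
            "timeout": "30",
        },
        # HAProxy frontends for the API: one internal listener and one
        # external listener behind api.testbed.osism.xyz, both on port 9511.
        "haproxy": {
            "magnum_api": {
                "enabled": "yes", "mode": "http", "external": False,
                "port": "9511", "listen_port": "9511",
            },
            "magnum_api_external": {
                "enabled": "yes", "mode": "http", "external": True,
                "external_fqdn": "api.testbed.osism.xyz",
                "port": "9511", "listen_port": "9511",
            },
        },
    },
}

Tasks such as "Copying over config.json files for services" and "Check magnum containers" iterate over exactly this mapping, which is why every node reports one result per service key.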
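The interleaved "Task … is in state STARTED" lines are the OSISM client on the orchestrator polling the state of the tasks it queued (four task IDs here: b7476797…, b35a8372…, 77b28507… and 588e1e60…), sleeping briefly between rounds until each one reports SUCCESS. A minimal sketch of such a wait loop, assuming a hypothetical get_task_state(task_id) helper rather than any specific client API:

import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    """Poll task states until no task is still running.

    get_task_state is assumed to return a state string such as
    "STARTED" or "SUCCESS"; this mirrors the log lines above and is
    not a specific OSISM or Celery API.
    """
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)

In this build the checks repeat until 14:06:08, when task 588e1e60-272a-48d8-a2a1-8edd5e102ba7 flips to SUCCESS and the output of the glance deployment play follows in the log.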
2025-11-08 14:03:39.575846 | orchestrator | 2025-11-08 14:03:39 | INFO  | Task 588e1e60-272a-48d8-a2a1-8edd5e102ba7 is in state STARTED 2025-11-08 14:03:39.576246 | orchestrator | 2025-11-08 14:03:39 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:03:42.622333 | orchestrator | 2025-11-08 14:03:42 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:03:42.623343 | orchestrator | 2025-11-08 14:03:42 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:03:42.626427 | orchestrator | 2025-11-08 14:03:42 | INFO  | Task 77b28507-4155-427a-98a3-1a0538d44a40 is in state STARTED 2025-11-08 14:03:42.627953 | orchestrator | 2025-11-08 14:03:42 | INFO  | Task 588e1e60-272a-48d8-a2a1-8edd5e102ba7 is in state STARTED 2025-11-08 14:03:42.627996 | orchestrator | 2025-11-08 14:03:42 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:03:45.660906 | orchestrator | 2025-11-08 14:03:45 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:03:45.661302 | orchestrator | 2025-11-08 14:03:45 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:03:45.662908 | orchestrator | 2025-11-08 14:03:45 | INFO  | Task 77b28507-4155-427a-98a3-1a0538d44a40 is in state STARTED 2025-11-08 14:03:45.663275 | orchestrator | 2025-11-08 14:03:45 | INFO  | Task 588e1e60-272a-48d8-a2a1-8edd5e102ba7 is in state STARTED 2025-11-08 14:03:45.663412 | orchestrator | 2025-11-08 14:03:45 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:03:48.699112 | orchestrator | 2025-11-08 14:03:48 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:03:48.700735 | orchestrator | 2025-11-08 14:03:48 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:03:48.701346 | orchestrator | 2025-11-08 14:03:48 | INFO  | Task 77b28507-4155-427a-98a3-1a0538d44a40 is in state STARTED 2025-11-08 14:03:48.703881 | orchestrator | 2025-11-08 14:03:48 | INFO  | Task 588e1e60-272a-48d8-a2a1-8edd5e102ba7 is in state STARTED 2025-11-08 14:03:48.703955 | orchestrator | 2025-11-08 14:03:48 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:03:51.741059 | orchestrator | 2025-11-08 14:03:51 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:03:51.741375 | orchestrator | 2025-11-08 14:03:51 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:03:51.742320 | orchestrator | 2025-11-08 14:03:51 | INFO  | Task 77b28507-4155-427a-98a3-1a0538d44a40 is in state STARTED 2025-11-08 14:03:51.743466 | orchestrator | 2025-11-08 14:03:51 | INFO  | Task 588e1e60-272a-48d8-a2a1-8edd5e102ba7 is in state STARTED 2025-11-08 14:03:51.743542 | orchestrator | 2025-11-08 14:03:51 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:03:54.777860 | orchestrator | 2025-11-08 14:03:54 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:03:54.780144 | orchestrator | 2025-11-08 14:03:54 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:03:54.781646 | orchestrator | 2025-11-08 14:03:54 | INFO  | Task 77b28507-4155-427a-98a3-1a0538d44a40 is in state STARTED 2025-11-08 14:03:54.782782 | orchestrator | 2025-11-08 14:03:54 | INFO  | Task 588e1e60-272a-48d8-a2a1-8edd5e102ba7 is in state STARTED 2025-11-08 14:03:54.783159 | orchestrator | 2025-11-08 14:03:54 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:03:57.825564 
| orchestrator | 2025-11-08 14:03:57 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:03:57.825810 | orchestrator | 2025-11-08 14:03:57 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:03:57.827129 | orchestrator | 2025-11-08 14:03:57 | INFO  | Task 77b28507-4155-427a-98a3-1a0538d44a40 is in state STARTED 2025-11-08 14:03:57.828102 | orchestrator | 2025-11-08 14:03:57 | INFO  | Task 588e1e60-272a-48d8-a2a1-8edd5e102ba7 is in state STARTED 2025-11-08 14:03:57.828112 | orchestrator | 2025-11-08 14:03:57 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:06:08.940624 | orchestrator | 2025-11-08 14:06:08 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:06:08.943528 | orchestrator | 2025-11-08 14:06:08 | INFO  | Task
b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:06:08.947046 | orchestrator | 2025-11-08 14:06:08 | INFO  | Task 77b28507-4155-427a-98a3-1a0538d44a40 is in state STARTED 2025-11-08 14:06:08.953119 | orchestrator | 2025-11-08 14:06:08 | INFO  | Task 588e1e60-272a-48d8-a2a1-8edd5e102ba7 is in state SUCCESS 2025-11-08 14:06:08.953204 | orchestrator | 2025-11-08 14:06:08 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:06:08.954905 | orchestrator | 2025-11-08 14:06:08.954961 | orchestrator | 2025-11-08 14:06:08.954976 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-08 14:06:08.954989 | orchestrator | 2025-11-08 14:06:08.955001 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-08 14:06:08.955015 | orchestrator | Saturday 08 November 2025 14:03:11 +0000 (0:00:00.261) 0:00:00.261 ***** 2025-11-08 14:06:08.955028 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:06:08.955041 | orchestrator | ok: [testbed-node-1] 2025-11-08 14:06:08.955052 | orchestrator | ok: [testbed-node-2] 2025-11-08 14:06:08.955064 | orchestrator | 2025-11-08 14:06:08.955075 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-08 14:06:08.955087 | orchestrator | Saturday 08 November 2025 14:03:11 +0000 (0:00:00.331) 0:00:00.593 ***** 2025-11-08 14:06:08.955099 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-11-08 14:06:08.955112 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-11-08 14:06:08.955124 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-11-08 14:06:08.955136 | orchestrator | 2025-11-08 14:06:08.955148 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-11-08 14:06:08.955160 | orchestrator | 2025-11-08 14:06:08.955173 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-11-08 14:06:08.955239 | orchestrator | Saturday 08 November 2025 14:03:12 +0000 (0:00:00.620) 0:00:01.213 ***** 2025-11-08 14:06:08.955253 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 14:06:08.955266 | orchestrator | 2025-11-08 14:06:08.955278 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-11-08 14:06:08.955305 | orchestrator | Saturday 08 November 2025 14:03:13 +0000 (0:00:00.578) 0:00:01.791 ***** 2025-11-08 14:06:08.955318 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-11-08 14:06:08.955330 | orchestrator | 2025-11-08 14:06:08.955342 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-11-08 14:06:08.955423 | orchestrator | Saturday 08 November 2025 14:03:16 +0000 (0:00:03.680) 0:00:05.472 ***** 2025-11-08 14:06:08.955436 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-11-08 14:06:08.955448 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-11-08 14:06:08.955460 | orchestrator | 2025-11-08 14:06:08.955471 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-11-08 14:06:08.955482 | orchestrator | Saturday 08 November 2025 14:03:23 +0000 (0:00:06.961) 0:00:12.433 ***** 2025-11-08 
14:06:08.955494 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-11-08 14:06:08.955507 | orchestrator | 2025-11-08 14:06:08.955519 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-11-08 14:06:08.955533 | orchestrator | Saturday 08 November 2025 14:03:27 +0000 (0:00:03.370) 0:00:15.804 ***** 2025-11-08 14:06:08.955546 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-11-08 14:06:08.955558 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-11-08 14:06:08.955570 | orchestrator | 2025-11-08 14:06:08.955578 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-11-08 14:06:08.955587 | orchestrator | Saturday 08 November 2025 14:03:31 +0000 (0:00:03.888) 0:00:19.692 ***** 2025-11-08 14:06:08.955595 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-11-08 14:06:08.955603 | orchestrator | 2025-11-08 14:06:08.955615 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-11-08 14:06:08.955628 | orchestrator | Saturday 08 November 2025 14:03:34 +0000 (0:00:03.505) 0:00:23.197 ***** 2025-11-08 14:06:08.955640 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-11-08 14:06:08.955652 | orchestrator | 2025-11-08 14:06:08.955664 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-11-08 14:06:08.955676 | orchestrator | Saturday 08 November 2025 14:03:38 +0000 (0:00:03.836) 0:00:27.034 ***** 2025-11-08 14:06:08.955719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-08 14:06:08.955761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 
'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-08 14:06:08.955774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-08 14:06:08.955783 | orchestrator | 2025-11-08 14:06:08.955792 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-11-08 14:06:08.955806 | orchestrator | Saturday 08 November 2025 14:03:41 +0000 (0:00:03.465) 0:00:30.499 
***** 2025-11-08 14:06:08.955815 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 14:06:08.955823 | orchestrator | 2025-11-08 14:06:08.955838 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-11-08 14:06:08.955846 | orchestrator | Saturday 08 November 2025 14:03:42 +0000 (0:00:00.771) 0:00:31.271 ***** 2025-11-08 14:06:08.955855 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:06:08.955864 | orchestrator | changed: [testbed-node-1] 2025-11-08 14:06:08.955873 | orchestrator | changed: [testbed-node-2] 2025-11-08 14:06:08.955881 | orchestrator | 2025-11-08 14:06:08.955889 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-11-08 14:06:08.955896 | orchestrator | Saturday 08 November 2025 14:03:47 +0000 (0:00:05.018) 0:00:36.290 ***** 2025-11-08 14:06:08.955903 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-11-08 14:06:08.955910 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-11-08 14:06:08.955918 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-11-08 14:06:08.955925 | orchestrator | 2025-11-08 14:06:08.955932 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-11-08 14:06:08.955939 | orchestrator | Saturday 08 November 2025 14:03:49 +0000 (0:00:01.731) 0:00:38.022 ***** 2025-11-08 14:06:08.955946 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-11-08 14:06:08.955954 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-11-08 14:06:08.955965 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-11-08 14:06:08.955972 | orchestrator | 2025-11-08 14:06:08.955980 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-11-08 14:06:08.955987 | orchestrator | Saturday 08 November 2025 14:03:50 +0000 (0:00:01.334) 0:00:39.356 ***** 2025-11-08 14:06:08.955995 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:06:08.956002 | orchestrator | ok: [testbed-node-1] 2025-11-08 14:06:08.956009 | orchestrator | ok: [testbed-node-2] 2025-11-08 14:06:08.956016 | orchestrator | 2025-11-08 14:06:08.956023 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-11-08 14:06:08.956030 | orchestrator | Saturday 08 November 2025 14:03:51 +0000 (0:00:00.785) 0:00:40.142 ***** 2025-11-08 14:06:08.956037 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:06:08.956045 | orchestrator | 2025-11-08 14:06:08.956052 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-11-08 14:06:08.956059 | orchestrator | Saturday 08 November 2025 14:03:51 +0000 (0:00:00.382) 0:00:40.524 ***** 2025-11-08 14:06:08.956066 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:06:08.956073 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:06:08.956080 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:06:08.956088 | orchestrator | 2025-11-08 14:06:08.956095 | orchestrator | TASK [glance : 
include_tasks] ************************************************** 2025-11-08 14:06:08.956102 | orchestrator | Saturday 08 November 2025 14:03:52 +0000 (0:00:00.378) 0:00:40.903 ***** 2025-11-08 14:06:08.956109 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 14:06:08.956116 | orchestrator | 2025-11-08 14:06:08.956123 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-11-08 14:06:08.956130 | orchestrator | Saturday 08 November 2025 14:03:52 +0000 (0:00:00.708) 0:00:41.611 ***** 2025-11-08 14:06:08.956143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-08 14:06:08.956161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-08 14:06:08.956169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-08 14:06:08.956182 | orchestrator | 2025-11-08 14:06:08.956189 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-11-08 14:06:08.956196 | orchestrator | Saturday 08 November 2025 14:03:58 +0000 (0:00:05.042) 0:00:46.653 ***** 2025-11-08 14:06:08.956216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-11-08 14:06:08.956247 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:06:08.956256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-11-08 14:06:08.956268 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:06:08.956283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-11-08 14:06:08.956291 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:06:08.956299 | orchestrator | 2025-11-08 14:06:08.956306 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-11-08 14:06:08.956313 | orchestrator | Saturday 08 November 2025 14:04:01 +0000 (0:00:03.513) 0:00:50.167 ***** 2025-11-08 14:06:08.956324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-11-08 14:06:08.956337 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:06:08.956373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-11-08 14:06:08.956387 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:06:08.956406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-11-08 14:06:08.956414 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:06:08.956421 | orchestrator | 2025-11-08 14:06:08.956434 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-11-08 14:06:08.956441 | orchestrator | Saturday 08 November 2025 14:04:04 +0000 (0:00:03.317) 0:00:53.485 ***** 2025-11-08 14:06:08.956448 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:06:08.956455 | orchestrator | skipping: [testbed-node-1] 
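Editor's note on the healthcheck wired into the glance_api container definition repeated in the items above: each node's entry carries 'test': ['CMD-SHELL', 'healthcheck_curl http://<api_interface_address>:9292'] with interval/timeout 30 and 3 retries. healthcheck_curl is kolla's own helper script and its exact behaviour is not reproduced in this log; as a rough, hedged illustration only, the check amounts to an HTTP probe of the Glance API port that treats timeouts, connection errors and non-2xx/3xx responses as unhealthy. The sketch below uses only the Python standard library; the function name probe, the use of urllib, and the hard-coded address/timeout (taken from the testbed-node-0 item above) are illustrative assumptions, not the actual kolla implementation.

import sys
import urllib.request  # stdlib only; illustration, not kolla's healthcheck_curl script

def probe(url: str, timeout: float = 30.0) -> int:
    # Return 0 (healthy) when the endpoint answers below HTTP 400, 1 otherwise.
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 0 if resp.status < 400 else 1
    except Exception:
        # connection errors, timeouts and HTTP error statuses all count as unhealthy
        return 1

if __name__ == "__main__":
    # address and timeout mirror the healthcheck entry for testbed-node-0 shown above
    sys.exit(probe("http://192.168.16.10:9292", timeout=30.0))

A container runtime evaluating such a check would mark the container unhealthy after the configured number of consecutive non-zero exits (3 retries at a 30-second interval in the definition above).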
2025-11-08 14:06:08.956463 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:06:08.956470 | orchestrator | 2025-11-08 14:06:08.956477 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-11-08 14:06:08.956484 | orchestrator | Saturday 08 November 2025 14:04:08 +0000 (0:00:03.785) 0:00:57.270 ***** 2025-11-08 14:06:08.956496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-08 14:06:08.956509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-08 14:06:08.956522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-08 14:06:08.956530 | orchestrator | 2025-11-08 14:06:08.956537 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-11-08 14:06:08.956544 | orchestrator | Saturday 08 November 2025 14:04:13 +0000 (0:00:05.267) 0:01:02.538 ***** 2025-11-08 14:06:08.956551 | orchestrator | changed: [testbed-node-1] 2025-11-08 14:06:08.956558 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:06:08.956565 | orchestrator | changed: [testbed-node-2] 2025-11-08 14:06:08.956572 | orchestrator | 2025-11-08 14:06:08.956579 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-11-08 14:06:08.956587 | orchestrator | Saturday 08 November 2025 14:04:24 +0000 (0:00:10.387) 0:01:12.926 ***** 2025-11-08 14:06:08.956594 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:06:08.956601 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:06:08.956608 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:06:08.956615 | orchestrator | 2025-11-08 14:06:08.956622 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-11-08 14:06:08.956634 | orchestrator | Saturday 08 November 2025 14:04:28 +0000 (0:00:04.165) 0:01:17.091 ***** 2025-11-08 14:06:08.956642 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:06:08.956649 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:06:08.956656 | orchestrator | skipping: [testbed-node-0] 2025-11-08 
14:06:08.956663 | orchestrator | 2025-11-08 14:06:08.956670 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-11-08 14:06:08.956677 | orchestrator | Saturday 08 November 2025 14:04:32 +0000 (0:00:04.386) 0:01:21.477 ***** 2025-11-08 14:06:08.956686 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:06:08.956698 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:06:08.956710 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:06:08.956721 | orchestrator | 2025-11-08 14:06:08.956733 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-11-08 14:06:08.956744 | orchestrator | Saturday 08 November 2025 14:04:36 +0000 (0:00:03.261) 0:01:24.739 ***** 2025-11-08 14:06:08.956755 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:06:08.956766 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:06:08.956777 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:06:08.956790 | orchestrator | 2025-11-08 14:06:08.956801 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-11-08 14:06:08.956822 | orchestrator | Saturday 08 November 2025 14:04:39 +0000 (0:00:03.702) 0:01:28.441 ***** 2025-11-08 14:06:08.956834 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:06:08.956846 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:06:08.956857 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:06:08.956864 | orchestrator | 2025-11-08 14:06:08.956871 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-11-08 14:06:08.956883 | orchestrator | Saturday 08 November 2025 14:04:40 +0000 (0:00:00.349) 0:01:28.791 ***** 2025-11-08 14:06:08.956890 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-11-08 14:06:08.956898 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:06:08.956905 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-11-08 14:06:08.956912 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:06:08.956919 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-11-08 14:06:08.956926 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:06:08.956933 | orchestrator | 2025-11-08 14:06:08.956940 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-11-08 14:06:08.956947 | orchestrator | Saturday 08 November 2025 14:04:43 +0000 (0:00:03.661) 0:01:32.453 ***** 2025-11-08 14:06:08.956955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-08 14:06:08.956972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-08 14:06:08.956989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': 
['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-08 14:06:08.956998 | orchestrator | 2025-11-08 14:06:08.957005 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-11-08 14:06:08.957012 | orchestrator | Saturday 08 November 2025 14:04:47 +0000 (0:00:04.065) 0:01:36.519 ***** 2025-11-08 14:06:08.957019 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:06:08.957026 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:06:08.957033 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:06:08.957040 | orchestrator | 2025-11-08 14:06:08.957047 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-11-08 14:06:08.957054 | orchestrator | Saturday 08 November 2025 14:04:48 +0000 (0:00:00.301) 0:01:36.820 ***** 2025-11-08 14:06:08.957062 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:06:08.957069 | orchestrator | 2025-11-08 14:06:08.957076 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-11-08 14:06:08.957083 | orchestrator | Saturday 08 November 2025 14:04:50 +0000 (0:00:02.208) 0:01:39.029 ***** 2025-11-08 14:06:08.957090 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:06:08.957097 | orchestrator | 2025-11-08 14:06:08.957104 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-11-08 14:06:08.957112 | orchestrator | Saturday 08 November 2025 14:04:52 +0000 (0:00:02.282) 0:01:41.312 ***** 2025-11-08 14:06:08.957119 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:06:08.957126 | orchestrator | 2025-11-08 14:06:08.957133 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-11-08 14:06:08.957146 | orchestrator | Saturday 08 November 2025 14:04:54 +0000 (0:00:02.273) 0:01:43.585 ***** 2025-11-08 14:06:08.957154 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:06:08.957161 | orchestrator | 2025-11-08 14:06:08.957168 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-11-08 14:06:08.957175 | orchestrator | Saturday 08 November 2025 14:05:23 +0000 (0:00:28.979) 0:02:12.564 ***** 2025-11-08 14:06:08.957182 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:06:08.957189 | orchestrator | 2025-11-08 14:06:08.957200 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-11-08 14:06:08.957208 | orchestrator | Saturday 08 November 2025 14:05:26 +0000 (0:00:02.157) 0:02:14.722 ***** 2025-11-08 14:06:08.957215 | orchestrator | 2025-11-08 14:06:08.957222 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-11-08 
14:06:08.957229 | orchestrator | Saturday 08 November 2025 14:05:26 +0000 (0:00:00.072) 0:02:14.795 ***** 2025-11-08 14:06:08.957237 | orchestrator | 2025-11-08 14:06:08.957244 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-11-08 14:06:08.957251 | orchestrator | Saturday 08 November 2025 14:05:26 +0000 (0:00:00.098) 0:02:14.893 ***** 2025-11-08 14:06:08.957258 | orchestrator | 2025-11-08 14:06:08.957265 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-11-08 14:06:08.957272 | orchestrator | Saturday 08 November 2025 14:05:26 +0000 (0:00:00.127) 0:02:15.021 ***** 2025-11-08 14:06:08.957279 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:06:08.957286 | orchestrator | changed: [testbed-node-2] 2025-11-08 14:06:08.957294 | orchestrator | changed: [testbed-node-1] 2025-11-08 14:06:08.957301 | orchestrator | 2025-11-08 14:06:08.957308 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 14:06:08.957316 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-11-08 14:06:08.957324 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-11-08 14:06:08.957334 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-11-08 14:06:08.957342 | orchestrator | 2025-11-08 14:06:08.957366 | orchestrator | 2025-11-08 14:06:08.957373 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 14:06:08.957380 | orchestrator | Saturday 08 November 2025 14:06:07 +0000 (0:00:41.391) 0:02:56.413 ***** 2025-11-08 14:06:08.957388 | orchestrator | =============================================================================== 2025-11-08 14:06:08.957395 | orchestrator | glance : Restart glance-api container ---------------------------------- 41.39s 2025-11-08 14:06:08.957402 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 28.98s 2025-11-08 14:06:08.957409 | orchestrator | glance : Copying over glance-api.conf ---------------------------------- 10.39s 2025-11-08 14:06:08.957416 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.96s 2025-11-08 14:06:08.957423 | orchestrator | glance : Copying over config.json files for services -------------------- 5.27s 2025-11-08 14:06:08.957430 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 5.04s 2025-11-08 14:06:08.957437 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 5.02s 2025-11-08 14:06:08.957444 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 4.39s 2025-11-08 14:06:08.957452 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.17s 2025-11-08 14:06:08.957468 | orchestrator | glance : Check glance containers ---------------------------------------- 4.07s 2025-11-08 14:06:08.957475 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.89s 2025-11-08 14:06:08.957491 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.84s 2025-11-08 14:06:08.957503 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.79s 2025-11-08 14:06:08.957511 | 
orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.70s 2025-11-08 14:06:08.957518 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.68s 2025-11-08 14:06:08.957525 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.66s 2025-11-08 14:06:08.957532 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.51s 2025-11-08 14:06:08.957539 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.51s 2025-11-08 14:06:08.957547 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.47s 2025-11-08 14:06:08.957554 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.37s 2025-11-08 14:06:11.993475 | orchestrator | 2025-11-08 14:06:11 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:06:11.995429 | orchestrator | 2025-11-08 14:06:11 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:06:11.997857 | orchestrator | 2025-11-08 14:06:11 | INFO  | Task 77b28507-4155-427a-98a3-1a0538d44a40 is in state STARTED 2025-11-08 14:06:11.999831 | orchestrator | 2025-11-08 14:06:12 | INFO  | Task 6dfb4a94-bfc5-4e25-b0c4-e283d187353e is in state STARTED 2025-11-08 14:06:12.000160 | orchestrator | 2025-11-08 14:06:12 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:06:15.047056 | orchestrator | 2025-11-08 14:06:15 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:06:15.048442 | orchestrator | 2025-11-08 14:06:15 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:06:15.049490 | orchestrator | 2025-11-08 14:06:15 | INFO  | Task 77b28507-4155-427a-98a3-1a0538d44a40 is in state STARTED 2025-11-08 14:06:15.050801 | orchestrator | 2025-11-08 14:06:15 | INFO  | Task 6dfb4a94-bfc5-4e25-b0c4-e283d187353e is in state STARTED 2025-11-08 14:06:15.050830 | orchestrator | 2025-11-08 14:06:15 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:06:18.080887 | orchestrator | 2025-11-08 14:06:18 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:06:18.082699 | orchestrator | 2025-11-08 14:06:18 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:06:18.084142 | orchestrator | 2025-11-08 14:06:18 | INFO  | Task 77b28507-4155-427a-98a3-1a0538d44a40 is in state STARTED 2025-11-08 14:06:18.086185 | orchestrator | 2025-11-08 14:06:18 | INFO  | Task 6dfb4a94-bfc5-4e25-b0c4-e283d187353e is in state STARTED 2025-11-08 14:06:18.086270 | orchestrator | 2025-11-08 14:06:18 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:06:21.118306 | orchestrator | 2025-11-08 14:06:21 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:06:21.120185 | orchestrator | 2025-11-08 14:06:21 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:06:21.121685 | orchestrator | 2025-11-08 14:06:21 | INFO  | Task 77b28507-4155-427a-98a3-1a0538d44a40 is in state STARTED 2025-11-08 14:06:21.123186 | orchestrator | 2025-11-08 14:06:21 | INFO  | Task 6dfb4a94-bfc5-4e25-b0c4-e283d187353e is in state STARTED 2025-11-08 14:06:21.123307 | orchestrator | 2025-11-08 14:06:21 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:06:24.155393 | orchestrator | 2025-11-08 14:06:24 | INFO  | Task 
b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:06:24.155815 | orchestrator | 2025-11-08 14:06:24 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:06:24.157813 | orchestrator | 2025-11-08 14:06:24 | INFO  | Task 77b28507-4155-427a-98a3-1a0538d44a40 is in state STARTED 2025-11-08 14:06:24.159463 | orchestrator | 2025-11-08 14:06:24 | INFO  | Task 6dfb4a94-bfc5-4e25-b0c4-e283d187353e is in state STARTED 2025-11-08 14:06:24.159514 | orchestrator | 2025-11-08 14:06:24 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:06:27.196985 | orchestrator | 2025-11-08 14:06:27 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:06:27.197833 | orchestrator | 2025-11-08 14:06:27 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:06:27.201984 | orchestrator | 2025-11-08 14:06:27 | INFO  | Task 77b28507-4155-427a-98a3-1a0538d44a40 is in state STARTED 2025-11-08 14:06:27.203701 | orchestrator | 2025-11-08 14:06:27 | INFO  | Task 6dfb4a94-bfc5-4e25-b0c4-e283d187353e is in state STARTED 2025-11-08 14:06:27.203742 | orchestrator | 2025-11-08 14:06:27 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:06:30.250736 | orchestrator | 2025-11-08 14:06:30 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:06:30.250865 | orchestrator | 2025-11-08 14:06:30 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state STARTED 2025-11-08 14:06:30.250883 | orchestrator | 2025-11-08 14:06:30 | INFO  | Task 77b28507-4155-427a-98a3-1a0538d44a40 is in state STARTED 2025-11-08 14:06:30.252688 | orchestrator | 2025-11-08 14:06:30 | INFO  | Task 6dfb4a94-bfc5-4e25-b0c4-e283d187353e is in state STARTED 2025-11-08 14:06:30.252744 | orchestrator | 2025-11-08 14:06:30 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:06:33.293436 | orchestrator | 2025-11-08 14:06:33 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:06:33.294495 | orchestrator | 2025-11-08 14:06:33 | INFO  | Task b35a8372-9890-4959-867b-af480b7641d7 is in state SUCCESS 2025-11-08 14:06:33.295394 | orchestrator | 2025-11-08 14:06:33 | INFO  | Task 866c4974-8b8f-4527-9612-20fb157fc7cf is in state STARTED 2025-11-08 14:06:33.297665 | orchestrator | 2025-11-08 14:06:33 | INFO  | Task 77b28507-4155-427a-98a3-1a0538d44a40 is in state SUCCESS 2025-11-08 14:06:33.299022 | orchestrator | 2025-11-08 14:06:33.299066 | orchestrator | 2025-11-08 14:06:33.299075 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-11-08 14:06:33.299084 | orchestrator | 2025-11-08 14:06:33.299090 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-11-08 14:06:33.299098 | orchestrator | Saturday 08 November 2025 14:00:19 +0000 (0:00:00.324) 0:00:00.324 ***** 2025-11-08 14:06:33.299105 | orchestrator | changed: [localhost] 2025-11-08 14:06:33.299113 | orchestrator | 2025-11-08 14:06:33.299120 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-11-08 14:06:33.299126 | orchestrator | Saturday 08 November 2025 14:00:20 +0000 (0:00:00.918) 0:00:01.242 ***** 2025-11-08 14:06:33.299133 | orchestrator | 2025-11-08 14:06:33.299139 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-11-08 14:06:33.299146 | orchestrator | 2025-11-08 14:06:33.299153 | orchestrator | STILL ALIVE 
[task 'Download ironic-agent initramfs' is running] ****************
2025-11-08 14:06:33.299159 | orchestrator |
2025-11-08 14:06:33.299166 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-11-08 14:06:33.299172 | orchestrator |
2025-11-08 14:06:33.299179 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-11-08 14:06:33.299186 | orchestrator |
2025-11-08 14:06:33.299192 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-11-08 14:06:33.299198 | orchestrator |
2025-11-08 14:06:33.299205 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-11-08 14:06:33.299211 | orchestrator |
2025-11-08 14:06:33.299218 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-11-08 14:06:33.299248 | orchestrator | changed: [localhost]
2025-11-08 14:06:33.299255 | orchestrator |
2025-11-08 14:06:33.299262 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2025-11-08 14:06:33.299269 | orchestrator | Saturday 08 November 2025 14:06:17 +0000 (0:05:57.130) 0:05:58.373 *****
2025-11-08 14:06:33.299275 | orchestrator | changed: [localhost]
2025-11-08 14:06:33.299281 | orchestrator |
2025-11-08 14:06:33.299288 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-11-08 14:06:33.299295 | orchestrator |
2025-11-08 14:06:33.299302 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-11-08 14:06:33.299341 | orchestrator | Saturday 08 November 2025 14:06:30 +0000 (0:00:13.287) 0:06:11.661 *****
2025-11-08 14:06:33.299349 | orchestrator | ok: [testbed-node-0]
2025-11-08 14:06:33.299355 | orchestrator | ok: [testbed-node-1]
2025-11-08 14:06:33.299362 | orchestrator | ok: [testbed-node-2]
2025-11-08 14:06:33.299369 | orchestrator |
2025-11-08 14:06:33.299375 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-11-08 14:06:33.299394 | orchestrator | Saturday 08 November 2025 14:06:30 +0000 (0:00:00.312) 0:06:11.974 *****
2025-11-08 14:06:33.299401 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2025-11-08 14:06:33.299409 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2025-11-08 14:06:33.299416 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2025-11-08 14:06:33.299422 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2025-11-08 14:06:33.299428 | orchestrator |
2025-11-08 14:06:33.299435 | orchestrator | PLAY [Apply role ironic] *******************************************************
2025-11-08 14:06:33.299441 | orchestrator | skipping: no hosts matched
2025-11-08 14:06:33.299449 | orchestrator |
2025-11-08 14:06:33.299455 | orchestrator | PLAY RECAP *********************************************************************
2025-11-08 14:06:33.299462 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-08 14:06:33.299472 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-08 14:06:33.299480 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-08 14:06:33.299487 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-08 14:06:33.299494 | orchestrator |
2025-11-08 14:06:33.299500 | orchestrator |
2025-11-08 14:06:33.299507 | orchestrator | TASKS RECAP ********************************************************************
2025-11-08 14:06:33.299514 | orchestrator | Saturday 08 November 2025 14:06:31 +0000 (0:00:00.806) 0:06:12.781 *****
2025-11-08 14:06:33.299520 | orchestrator | ===============================================================================
2025-11-08 14:06:33.299527 | orchestrator | Download ironic-agent initramfs --------------------------------------- 357.13s
2025-11-08 14:06:33.299533 | orchestrator | Download ironic-agent kernel ------------------------------------------- 13.29s
2025-11-08 14:06:33.299540 | orchestrator | Ensure the destination directory exists --------------------------------- 0.92s
2025-11-08 14:06:33.299546 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.81s
2025-11-08 14:06:33.299553 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s
2025-11-08 14:06:33.299559 | orchestrator |
2025-11-08 14:06:33.299566 | orchestrator |
2025-11-08 14:06:33.299572 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-11-08 14:06:33.299579 | orchestrator |
2025-11-08 14:06:33.299585 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-11-08 14:06:33.299593 | orchestrator | Saturday 08 November 2025 14:03:13 +0000 (0:00:00.275) 0:00:00.275 *****
2025-11-08 14:06:33.299599 | orchestrator | ok: [testbed-node-0]
2025-11-08 14:06:33.299613 | orchestrator | ok: [testbed-node-1]
2025-11-08 14:06:33.299621 | orchestrator | ok: [testbed-node-2]
2025-11-08 14:06:33.299628 | orchestrator | ok: [testbed-node-3]
2025-11-08 14:06:33.299635 | orchestrator | ok: [testbed-node-4]
2025-11-08 14:06:33.299642 | orchestrator | ok: [testbed-node-5]
2025-11-08 14:06:33.299649 | orchestrator |
2025-11-08 14:06:33.299657 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-11-08 14:06:33.299664 | orchestrator | Saturday 08 November 2025 14:03:13 +0000 (0:00:00.688) 0:00:00.963 *****
2025-11-08 14:06:33.299683 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2025-11-08 14:06:33.299690 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2025-11-08 14:06:33.299696 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2025-11-08 14:06:33.299703 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True)
2025-11-08 14:06:33.299710 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True)
2025-11-08 14:06:33.299717 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True)
2025-11-08 14:06:33.299724 | orchestrator |
2025-11-08 14:06:33.299732 | orchestrator | PLAY [Apply role cinder] *******************************************************
2025-11-08 14:06:33.299739 | orchestrator |
2025-11-08 14:06:33.299747 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-11-08 14:06:33.299754 | orchestrator | Saturday 08 November 2025 14:03:14 +0000 (0:00:00.594) 0:00:01.557 *****
2025-11-08 14:06:33.299761 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-11-08 14:06:33.299768 | orchestrator
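
The two "Group hosts based on ..." plays above are the standard kolla-ansible pattern for turning service flags into dynamic inventory groups; the warning "Could not match supplied host pattern, ignoring: enable_ironic_True" only means that no host set enable_ironic to true, which is also why the following "Apply role ironic" play reports "skipping: no hosts matched". A minimal sketch of the pattern, assuming the flag variables implied by the group names in the log (enable_cinder, enable_ironic) and simplified relative to the real kolla-ansible playbooks:

- name: Group hosts based on enabled services
  hosts: all
  gather_facts: false
  tasks:
    - name: Put each host into an enable_<service>_<True|False> group
      ansible.builtin.group_by:
        key: "{{ item }}"
      loop:
        - "enable_cinder_{{ enable_cinder | default(false) | bool }}"
        - "enable_ironic_{{ enable_ironic | default(false) | bool }}"

- name: Apply role ironic
  # Only hosts that landed in enable_ironic_True run the role; with Ironic
  # disabled everywhere, this play resolves to an empty host list.
  hosts: enable_ironic_True
  gather_facts: false
  roles:
    - ironic

The real playbooks additionally intersect these groups with the per-service inventory groups, but the effect is the same: disabled services are filtered out before any role task runs.
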
| 2025-11-08 14:06:33.299877 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-11-08 14:06:33.299886 | orchestrator | Saturday 08 November 2025 14:03:15 +0000 (0:00:01.277) 0:00:02.835 ***** 2025-11-08 14:06:33.299892 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-11-08 14:06:33.299899 | orchestrator | 2025-11-08 14:06:33.299905 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-11-08 14:06:33.299912 | orchestrator | Saturday 08 November 2025 14:03:19 +0000 (0:00:03.559) 0:00:06.394 ***** 2025-11-08 14:06:33.299918 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-11-08 14:06:33.299924 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-11-08 14:06:33.299929 | orchestrator | 2025-11-08 14:06:33.299945 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-11-08 14:06:33.299951 | orchestrator | Saturday 08 November 2025 14:03:26 +0000 (0:00:07.107) 0:00:13.501 ***** 2025-11-08 14:06:33.299958 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-11-08 14:06:33.299965 | orchestrator | 2025-11-08 14:06:33.299972 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-11-08 14:06:33.299984 | orchestrator | Saturday 08 November 2025 14:03:29 +0000 (0:00:03.411) 0:00:16.913 ***** 2025-11-08 14:06:33.299990 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-11-08 14:06:33.299997 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-11-08 14:06:33.300004 | orchestrator | 2025-11-08 14:06:33.300010 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-11-08 14:06:33.300017 | orchestrator | Saturday 08 November 2025 14:03:33 +0000 (0:00:04.010) 0:00:20.923 ***** 2025-11-08 14:06:33.300024 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-11-08 14:06:33.300031 | orchestrator | 2025-11-08 14:06:33.300037 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-11-08 14:06:33.300044 | orchestrator | Saturday 08 November 2025 14:03:37 +0000 (0:00:03.476) 0:00:24.400 ***** 2025-11-08 14:06:33.300050 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-11-08 14:06:33.300057 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-11-08 14:06:33.300070 | orchestrator | 2025-11-08 14:06:33.300077 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-11-08 14:06:33.300084 | orchestrator | Saturday 08 November 2025 14:03:45 +0000 (0:00:08.240) 0:00:32.640 ***** 2025-11-08 14:06:33.300094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 
'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-08 14:06:33.300113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-08 14:06:33.300121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-08 14:06:33.300132 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-08 14:06:33.300140 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-08 14:06:33.300151 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-08 14:06:33.300164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-08 14:06:33.300172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-08 14:06:33.300179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-08 14:06:33.300191 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-08 14:06:33.300206 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-08 14:06:33.300213 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-08 14:06:33.300220 | orchestrator | 2025-11-08 14:06:33.300226 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-11-08 14:06:33.300234 | orchestrator | Saturday 08 November 2025 14:03:47 +0000 (0:00:02.530) 0:00:35.170 ***** 2025-11-08 14:06:33.300241 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:06:33.300247 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:06:33.300254 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:06:33.300260 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:06:33.300267 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:06:33.300274 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:06:33.300281 | orchestrator | 2025-11-08 14:06:33.300287 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-11-08 14:06:33.300294 | orchestrator | Saturday 08 November 2025 14:03:48 +0000 (0:00:00.576) 0:00:35.747 ***** 2025-11-08 14:06:33.300300 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:06:33.300307 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:06:33.300335 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:06:33.300345 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 14:06:33.300352 | orchestrator | 2025-11-08 14:06:33.300358 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-11-08 14:06:33.300364 | orchestrator | Saturday 08 November 2025 14:03:49 +0000 (0:00:01.032) 0:00:36.779 ***** 2025-11-08 14:06:33.300369 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-11-08 14:06:33.300375 
| orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-11-08 14:06:33.300380 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-11-08 14:06:33.300386 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-11-08 14:06:33.300392 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-11-08 14:06:33.300398 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-11-08 14:06:33.300404 | orchestrator | 2025-11-08 14:06:33.300411 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-11-08 14:06:33.300417 | orchestrator | Saturday 08 November 2025 14:03:51 +0000 (0:00:01.798) 0:00:38.577 ***** 2025-11-08 14:06:33.300429 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-11-08 14:06:33.300443 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-11-08 14:06:33.300451 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-11-08 14:06:33.300457 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-11-08 14:06:33.300468 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-11-08 14:06:33.300474 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-11-08 14:06:33.300487 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-11-08 14:06:33.300496 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-11-08 14:06:33.300506 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-11-08 14:06:33.300513 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-11-08 14:06:33.300531 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-11-08 14:06:33.300542 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-11-08 14:06:33.300551 | orchestrator | 2025-11-08 14:06:33.300559 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-11-08 14:06:33.300567 | orchestrator | Saturday 08 November 2025 14:03:54 +0000 (0:00:03.618) 0:00:42.195 ***** 2025-11-08 14:06:33.300574 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-11-08 14:06:33.300582 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-11-08 14:06:33.300588 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-11-08 14:06:33.300593 | orchestrator | 2025-11-08 14:06:33.300599 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-11-08 14:06:33.300605 | orchestrator | Saturday 08 November 2025 14:03:57 +0000 (0:00:02.495) 0:00:44.691 ***** 2025-11-08 14:06:33.300611 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-11-08 14:06:33.300616 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-11-08 14:06:33.300622 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-11-08 14:06:33.300628 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-11-08 14:06:33.300634 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-11-08 14:06:33.300640 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-11-08 14:06:33.300646 | orchestrator | 2025-11-08 14:06:33.300652 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-11-08 14:06:33.300658 | orchestrator | Saturday 08 November 2025 14:04:00 +0000 (0:00:02.835) 0:00:47.526 ***** 2025-11-08 14:06:33.300664 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-11-08 14:06:33.300670 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-11-08 14:06:33.300676 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-11-08 14:06:33.300682 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-11-08 14:06:33.300687 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-11-08 14:06:33.300693 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-11-08 14:06:33.300699 | orchestrator | 2025-11-08 14:06:33.300705 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-11-08 14:06:33.300716 | orchestrator | Saturday 08 November 2025 14:04:01 +0000 (0:00:01.138) 0:00:48.664 ***** 2025-11-08 14:06:33.300727 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:06:33.300734 | orchestrator | 2025-11-08 14:06:33.300740 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-11-08 14:06:33.300747 | orchestrator | Saturday 08 November 2025 14:04:01 +0000 (0:00:00.129) 0:00:48.794 ***** 2025-11-08 14:06:33.300753 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:06:33.300760 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:06:33.300766 | orchestrator | 
skipping: [testbed-node-2] 2025-11-08 14:06:33.300772 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:06:33.300779 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:06:33.300785 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:06:33.300791 | orchestrator | 2025-11-08 14:06:33.300798 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-11-08 14:06:33.300803 | orchestrator | Saturday 08 November 2025 14:04:02 +0000 (0:00:00.846) 0:00:49.641 ***** 2025-11-08 14:06:33.300810 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 14:06:33.300818 | orchestrator | 2025-11-08 14:06:33.300824 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-11-08 14:06:33.300830 | orchestrator | Saturday 08 November 2025 14:04:03 +0000 (0:00:01.154) 0:00:50.796 ***** 2025-11-08 14:06:33.300841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-08 14:06:33.300849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-08 14:06:33.300857 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-08 14:06:33.301079 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-08 14:06:33.301103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-08 14:06:33.301116 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-08 14:06:33.301124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-08 14:06:33.301131 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-08 14:06:33.301138 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-08 14:06:33.301160 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-08 14:06:33.301167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-08 14:06:33.301177 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-08 14:06:33.301184 | orchestrator | 2025-11-08 14:06:33.301191 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 
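
Each loop item printed in the tasks above is one entry of the role's service map; the same structure recurs for every cinder service and every node, with only the healthcheck address changing (192.168.16.10/.11/.12 and so on). Rendered as YAML it is easier to read. The values below are taken from the testbed-node-0 cinder-api item in the log, with the empty placeholder strings in the volumes list omitted; note that both haproxy entries declare tls_backend: 'no', which is consistent with the "backend internal TLS certificate/key" tasks that follow being skipped on every host:

cinder-api:
  container_name: cinder_api
  group: cinder-api
  enabled: true
  image: registry.osism.tech/kolla/cinder-api:2024.2
  volumes:
    - "/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro"
    - "/etc/localtime:/etc/localtime:ro"
    - "/etc/timezone:/etc/timezone:ro"
    - "kolla_logs:/var/log/kolla/"
  healthcheck:
    interval: "30"
    retries: "3"
    start_period: "5"
    test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8776"]
    timeout: "30"
  haproxy:
    cinder_api:
      enabled: "yes"
      mode: http
      external: false
      port: "8776"
      listen_port: "8776"
      tls_backend: "no"
    cinder_api_external:
      enabled: "yes"
      mode: http
      external: true
      external_fqdn: api.testbed.osism.xyz
      port: "8776"
      listen_port: "8776"
      tls_backend: "no"
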
2025-11-08 14:06:33.301197 | orchestrator | Saturday 08 November 2025 14:04:06 +0000 (0:00:02.911) 0:00:53.707 ***** 2025-11-08 14:06:33.301205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-11-08 14:06:33.301211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-08 14:06:33.301222 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:06:33.301234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-11-08 14:06:33.301242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-08 14:06:33.301248 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:06:33.301259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-11-08 14:06:33.301266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-08 14:06:33.301272 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:06:33.301279 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-08 14:06:33.301291 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-08 14:06:33.301298 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:06:33.301337 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-08 14:06:33.301346 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-08 14:06:33.301353 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:06:33.301363 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-08 14:06:33.301370 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-08 14:06:33.301382 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:06:33.301388 | orchestrator | 2025-11-08 14:06:33.301395 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-11-08 14:06:33.301401 | orchestrator | Saturday 08 November 2025 14:04:08 +0000 (0:00:01.875) 0:00:55.582 ***** 2025-11-08 14:06:33.301412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-11-08 14:06:33.301420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-08 14:06:33.301427 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:06:33.301433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-11-08 14:06:33.301443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-08 14:06:33.301449 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:06:33.301455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 
'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-11-08 14:06:33.301466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-08 14:06:33.301472 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:06:33.301483 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-08 14:06:33.301490 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-08 14:06:33.301496 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:06:33.301508 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-08 14:06:33.301515 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-08 14:06:33.301525 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-08 14:06:33.301538 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-08 14:06:33.301545 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:06:33.301551 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:06:33.301557 | orchestrator | 2025-11-08 14:06:33.301563 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-11-08 14:06:33.301597 | orchestrator | Saturday 08 November 2025 14:04:09 +0000 (0:00:01.414) 0:00:56.997 ***** 2025-11-08 14:06:33.301604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-08 14:06:33.301614 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-08 14:06:33.301641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-08 14:06:33.301647 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-08 14:06:33.301672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-08 14:06:33.301679 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-08 14:06:33.301688 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-08 14:06:33.301699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-08 14:06:33.301706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-08 14:06:33.302004 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-08 14:06:33.302072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-08 14:06:33.302081 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-08 14:06:33.302088 | orchestrator | 2025-11-08 14:06:33.302095 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-11-08 14:06:33.302101 | orchestrator | Saturday 08 November 2025 14:04:12 +0000 (0:00:03.157) 0:01:00.154 ***** 2025-11-08 14:06:33.302117 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-11-08 14:06:33.302124 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:06:33.302136 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-11-08 14:06:33.302142 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:06:33.302148 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-11-08 14:06:33.302155 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-11-08 14:06:33.302161 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:06:33.302167 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-11-08 14:06:33.302173 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-11-08 14:06:33.302180 | orchestrator | 2025-11-08 14:06:33.302186 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-11-08 14:06:33.302192 | orchestrator | Saturday 08 November 2025 14:04:15 +0000 (0:00:02.170) 0:01:02.325 ***** 2025-11-08 14:06:33.302199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-08 14:06:33.302215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-08 14:06:33.302222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-08 14:06:33.302233 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-08 14:06:33.302247 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-08 14:06:33.302253 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-08 14:06:33.302260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-08 14:06:33.302271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-08 14:06:33.302279 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}}) 2025-11-08 14:06:33.302293 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-08 14:06:33.302300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-08 14:06:33.302306 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-08 14:06:33.302363 | orchestrator | 2025-11-08 14:06:33.302370 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-11-08 14:06:33.302376 | orchestrator | Saturday 08 November 2025 14:04:25 +0000 (0:00:10.606) 0:01:12.931 ***** 2025-11-08 14:06:33.302382 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:06:33.302388 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:06:33.302394 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:06:33.302399 | orchestrator | changed: [testbed-node-3] 2025-11-08 14:06:33.302405 | orchestrator | changed: [testbed-node-4] 2025-11-08 14:06:33.302411 | orchestrator | changed: [testbed-node-5] 2025-11-08 14:06:33.302418 | orchestrator | 2025-11-08 14:06:33.302425 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-11-08 14:06:33.302431 | orchestrator | Saturday 08 November 2025 14:04:28 +0000 (0:00:02.470) 0:01:15.402 ***** 2025-11-08 14:06:33.302444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-11-08 14:06:33.302458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-08 14:06:33.302464 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:06:33.302475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-11-08 14:06:33.302482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-08 14:06:33.302511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-11-08 14:06:33.302517 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:06:33.302531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-08 14:06:33.302538 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:06:33.302550 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-08 14:06:33.302563 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-08 14:06:33.302570 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:06:33.302577 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-08 14:06:33.302584 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': 
{'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-08 14:06:33.302590 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:06:33.302601 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-08 14:06:33.302613 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-08 14:06:33.302620 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:06:33.302627 | orchestrator | 2025-11-08 14:06:33.302634 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-11-08 14:06:33.302641 | orchestrator | Saturday 08 November 2025 14:04:30 +0000 (0:00:01.954) 0:01:17.357 ***** 2025-11-08 14:06:33.302648 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:06:33.302656 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:06:33.302663 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:06:33.302670 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:06:33.302677 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:06:33.302683 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:06:33.302689 | orchestrator | 2025-11-08 14:06:33.302696 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-11-08 14:06:33.302702 | orchestrator | Saturday 08 November 2025 14:04:31 +0000 (0:00:00.856) 0:01:18.213 ***** 2025-11-08 14:06:33.302714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-08 14:06:33.302722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-08 14:06:33.302730 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-08 14:06:33.302748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-08 14:06:33.302756 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-08 14:06:33.302801 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-08 14:06:33.302811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-08 14:06:33.302819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-08 14:06:33.302839 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-08 14:06:33.302848 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-08 14:06:33.302856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-08 14:06:33.302867 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-08 14:06:33.302874 | orchestrator | 2025-11-08 14:06:33.302882 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-11-08 14:06:33.302890 | orchestrator | Saturday 08 November 2025 14:04:33 +0000 (0:00:02.970) 0:01:21.183 ***** 2025-11-08 14:06:33.302897 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:06:33.302905 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:06:33.302912 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:06:33.302918 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:06:33.302925 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:06:33.302932 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:06:33.302939 | orchestrator | 2025-11-08 14:06:33.302945 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-11-08 14:06:33.302951 | orchestrator | Saturday 08 November 2025 14:04:34 +0000 (0:00:00.637) 0:01:21.821 ***** 2025-11-08 14:06:33.302958 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:06:33.302965 | orchestrator | 2025-11-08 14:06:33.302972 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-11-08 14:06:33.302990 | orchestrator | Saturday 08 November 2025 14:04:37 +0000 (0:00:02.629) 0:01:24.451 ***** 
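Note: the two database tasks above ("Creating Cinder database" and "Creating Cinder database user and setting permissions") boil down to creating the cinder schema on the MariaDB cluster and granting the cinder service user access to it. The sketch below is only a rough illustration of those two operations, not the code kolla-ansible actually runs; the host name and credentials are placeholders, not values from this deployment.

import pymysql

# Placeholder connection details; in a kolla-ansible deployment these come
# from the database VIP and the generated passwords, not from this sketch.
conn = pymysql.connect(host="db.example.internal",
                       user="root",
                       password="placeholder-root-password")
try:
    with conn.cursor() as cur:
        # Rough equivalent of "Creating Cinder database"
        cur.execute("CREATE DATABASE IF NOT EXISTS cinder")
        # Rough equivalent of "Creating Cinder database user and setting permissions"
        cur.execute("CREATE USER IF NOT EXISTS 'cinder'@'%' "
                    "IDENTIFIED BY 'placeholder-cinder-password'")
        cur.execute("GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%'")
    conn.commit()
finally:
    conn.close()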
2025-11-08 14:06:33.302998 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:06:33.303006 | orchestrator | 2025-11-08 14:06:33.303014 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-11-08 14:06:33.303021 | orchestrator | Saturday 08 November 2025 14:04:39 +0000 (0:00:02.366) 0:01:26.818 ***** 2025-11-08 14:06:33.303029 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:06:33.303037 | orchestrator | 2025-11-08 14:06:33.303045 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-11-08 14:06:33.303052 | orchestrator | Saturday 08 November 2025 14:05:00 +0000 (0:00:21.366) 0:01:48.185 ***** 2025-11-08 14:06:33.303059 | orchestrator | 2025-11-08 14:06:33.303066 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-11-08 14:06:33.303073 | orchestrator | Saturday 08 November 2025 14:05:01 +0000 (0:00:00.100) 0:01:48.285 ***** 2025-11-08 14:06:33.303081 | orchestrator | 2025-11-08 14:06:33.303088 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-11-08 14:06:33.303095 | orchestrator | Saturday 08 November 2025 14:05:01 +0000 (0:00:00.087) 0:01:48.372 ***** 2025-11-08 14:06:33.303103 | orchestrator | 2025-11-08 14:06:33.303109 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-11-08 14:06:33.303116 | orchestrator | Saturday 08 November 2025 14:05:01 +0000 (0:00:00.082) 0:01:48.454 ***** 2025-11-08 14:06:33.303123 | orchestrator | 2025-11-08 14:06:33.303129 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-11-08 14:06:33.303136 | orchestrator | Saturday 08 November 2025 14:05:01 +0000 (0:00:00.072) 0:01:48.526 ***** 2025-11-08 14:06:33.303142 | orchestrator | 2025-11-08 14:06:33.303154 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-11-08 14:06:33.303160 | orchestrator | Saturday 08 November 2025 14:05:01 +0000 (0:00:00.076) 0:01:48.603 ***** 2025-11-08 14:06:33.303167 | orchestrator | 2025-11-08 14:06:33.303173 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-11-08 14:06:33.303179 | orchestrator | Saturday 08 November 2025 14:05:01 +0000 (0:00:00.078) 0:01:48.681 ***** 2025-11-08 14:06:33.303185 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:06:33.303191 | orchestrator | changed: [testbed-node-2] 2025-11-08 14:06:33.303197 | orchestrator | changed: [testbed-node-1] 2025-11-08 14:06:33.303204 | orchestrator | 2025-11-08 14:06:33.303210 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-11-08 14:06:33.303217 | orchestrator | Saturday 08 November 2025 14:05:28 +0000 (0:00:27.453) 0:02:16.135 ***** 2025-11-08 14:06:33.303223 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:06:33.303230 | orchestrator | changed: [testbed-node-2] 2025-11-08 14:06:33.303236 | orchestrator | changed: [testbed-node-1] 2025-11-08 14:06:33.303243 | orchestrator | 2025-11-08 14:06:33.303249 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-11-08 14:06:33.303256 | orchestrator | Saturday 08 November 2025 14:05:37 +0000 (0:00:08.847) 0:02:24.983 ***** 2025-11-08 14:06:33.303263 | orchestrator | changed: [testbed-node-4] 2025-11-08 14:06:33.303269 | orchestrator | changed: 
[testbed-node-3] 2025-11-08 14:06:33.303275 | orchestrator | changed: [testbed-node-5] 2025-11-08 14:06:33.303282 | orchestrator | 2025-11-08 14:06:33.303289 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-11-08 14:06:33.303295 | orchestrator | Saturday 08 November 2025 14:06:24 +0000 (0:00:46.866) 0:03:11.850 ***** 2025-11-08 14:06:33.303302 | orchestrator | changed: [testbed-node-3] 2025-11-08 14:06:33.303330 | orchestrator | changed: [testbed-node-5] 2025-11-08 14:06:33.303337 | orchestrator | changed: [testbed-node-4] 2025-11-08 14:06:33.303344 | orchestrator | 2025-11-08 14:06:33.303350 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-11-08 14:06:33.303357 | orchestrator | Saturday 08 November 2025 14:06:30 +0000 (0:00:06.343) 0:03:18.194 ***** 2025-11-08 14:06:33.303369 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:06:33.303375 | orchestrator | 2025-11-08 14:06:33.303381 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 14:06:33.303393 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-11-08 14:06:33.303403 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-11-08 14:06:33.303409 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-11-08 14:06:33.303415 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-11-08 14:06:33.303422 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-11-08 14:06:33.303428 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-11-08 14:06:33.303435 | orchestrator | 2025-11-08 14:06:33.303441 | orchestrator | 2025-11-08 14:06:33.303448 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 14:06:33.303454 | orchestrator | Saturday 08 November 2025 14:06:31 +0000 (0:00:00.821) 0:03:19.015 ***** 2025-11-08 14:06:33.303461 | orchestrator | =============================================================================== 2025-11-08 14:06:33.303467 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 46.87s 2025-11-08 14:06:33.303474 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 27.45s 2025-11-08 14:06:33.303481 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 21.37s 2025-11-08 14:06:33.303487 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.61s 2025-11-08 14:06:33.303494 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 8.85s 2025-11-08 14:06:33.303500 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.24s 2025-11-08 14:06:33.303506 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 7.11s 2025-11-08 14:06:33.303513 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 6.34s 2025-11-08 14:06:33.303519 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.01s 2025-11-08 14:06:33.303526 | orchestrator | cinder : Copying over 
multiple ceph.conf for cinder services ------------ 3.62s
2025-11-08 14:06:33.303532 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.56s
2025-11-08 14:06:33.303539 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.48s
2025-11-08 14:06:33.303546 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.41s
2025-11-08 14:06:33.303552 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.16s
2025-11-08 14:06:33.303559 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.97s
2025-11-08 14:06:33.303566 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 2.91s
2025-11-08 14:06:33.303576 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.84s
2025-11-08 14:06:33.303583 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.63s
2025-11-08 14:06:33.303589 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.53s
2025-11-08 14:06:33.303595 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 2.50s
2025-11-08 14:06:33.303602 | orchestrator | 2025-11-08 14:06:33 | INFO  | Task 6dfb4a94-bfc5-4e25-b0c4-e283d187353e is in state STARTED
2025-11-08 14:06:33.303613 | orchestrator | 2025-11-08 14:06:33 | INFO  | Wait 1 second(s) until the next check
2025-11-08 14:06:36.338509 | orchestrator | 2025-11-08 14:06:36 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED
2025-11-08 14:06:36.339026 | orchestrator | 2025-11-08 14:06:36 | INFO  | Task 866c4974-8b8f-4527-9612-20fb157fc7cf is in state STARTED
2025-11-08 14:06:36.339807 | orchestrator | 2025-11-08 14:06:36 | INFO  | Task 6dfb4a94-bfc5-4e25-b0c4-e283d187353e is in state STARTED
2025-11-08 14:06:36.339855 | orchestrator | 2025-11-08 14:06:36 | INFO  | Wait 1 second(s) until the next check
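The cinder run above shows kolla-ansible's usual restart flow: configuration tasks notify restart handlers, the repeated "Flush handlers" meta tasks force any queued notifications to run at that point, and the RUNNING HANDLER entries then restart the service containers one by one. A minimal sketch of that notify/flush/handler pattern, assuming community.docker.docker_container as a stand-in for kolla-ansible's own container module (file names, image tag and container name are illustrative only, not taken from this log):

  # tasks/config.yml -- illustrative sketch, not the actual kolla-ansible cinder role
  - name: Copying over cinder.conf
    ansible.builtin.template:
      src: cinder.conf.j2
      dest: /etc/kolla/cinder-api/cinder.conf
      mode: "0660"
    notify:
      - Restart cinder-api container

  - name: Flush handlers
    ansible.builtin.meta: flush_handlers

  # handlers/main.yml -- only runs if a config task above reported "changed"
  - name: Restart cinder-api container
    community.docker.docker_container:
      name: cinder_api
      image: registry.osism.tech/kolla/cinder-api:2024.2
      state: started
      restart: true

Because the notification only fires when a template task reports changed, an unchanged configuration leaves the running containers untouched.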
[... status polls repeat about every 3 seconds; tasks b7476797-25df-40b2-a203-5df96e78be64, 866c4974-8b8f-4527-9612-20fb157fc7cf and 6dfb4a94-bfc5-4e25-b0c4-e283d187353e remain in state STARTED from 14:06:39 until 14:07:46 ...]
2025-11-08 14:07:49.586739 | orchestrator | 2025-11-08 14:07:49 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED
2025-11-08 14:07:49.589265 | orchestrator | 2025-11-08 14:07:49 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED
2025-11-08 14:07:49.590716 | orchestrator | 2025-11-08 14:07:49 | INFO  | Task 866c4974-8b8f-4527-9612-20fb157fc7cf is in state SUCCESS
2025-11-08 14:07:49.592177 | orchestrator | 2025-11-08 14:07:49 | INFO  | Task 6dfb4a94-bfc5-4e25-b0c4-e283d187353e is in state STARTED
2025-11-08 14:07:49.592207 | orchestrator | 2025-11-08 14:07:49 | INFO  | Wait 1 second(s) until the next check
[... status polls repeat about every 3 seconds; tasks be7ee159-6b59-4112-9f22-94b837336d63, b7476797-25df-40b2-a203-5df96e78be64 and 6dfb4a94-bfc5-4e25-b0c4-e283d187353e remain in state STARTED from 14:07:52 until 14:08:38 ...]
2025-11-08 14:08:41.474352 | orchestrator | 2025-11-08 14:08:41 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED
2025-11-08 14:08:41.476429 | orchestrator | 2025-11-08 14:08:41 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED
2025-11-08 14:08:41.481404 | orchestrator | 2025-11-08 14:08:41 | INFO  | Task 6dfb4a94-bfc5-4e25-b0c4-e283d187353e is in state SUCCESS
2025-11-08 14:08:41.483968 | orchestrator |
2025-11-08 14:08:41.484053 | orchestrator |
2025-11-08 14:08:41.484061 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-11-08 14:08:41.484068 | orchestrator |
2025-11-08 14:08:41.484072 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-11-08 14:08:41.484078 | orchestrator | Saturday 08 November 2025 14:06:36 +0000 (0:00:00.218) 0:00:00.218 *****
2025-11-08 14:08:41.484109 | orchestrator | ok: [testbed-node-0]
2025-11-08 14:08:41.484116 | orchestrator | ok: [testbed-node-1]
2025-11-08 14:08:41.484120 | orchestrator | ok: [testbed-node-2]
2025-11-08 14:08:41.484125 | orchestrator |
2025-11-08 14:08:41.484129 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-11-08 14:08:41.484134 | orchestrator | Saturday 08 November 2025 14:06:37 +0000 (0:00:00.399) 0:00:00.618 *****
2025-11-08 14:08:41.484139 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2025-11-08 14:08:41.484144 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2025-11-08 14:08:41.484149 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2025-11-08 14:08:41.484153 | orchestrator |
2025-11-08 14:08:41.484157 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
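The "Wait for the Nova service" play announced above blocks until the Nova API answers on its public endpoint before the following plays continue; the task and its runtime show up in the recap below. A minimal sketch of such a gate, assuming the default nova-api port 8774 and a placeholder VIP address, neither of which is taken from this log:

  # Illustrative sketch of a port-availability gate, not the kolla-ansible play itself
  - name: Wait for the Nova service
    hosts: nova-api
    gather_facts: false
    tasks:
      - name: Waiting for Nova public port to be UP
        ansible.builtin.wait_for:
          host: 192.0.2.10        # placeholder API VIP, not from this deployment
          port: 8774              # default nova-api port (assumption)
          connect_timeout: 5
          timeout: 300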
orchestrator | 2025-11-08 14:08:41.484166 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-11-08 14:08:41.484171 | orchestrator | Saturday 08 November 2025 14:06:38 +0000 (0:00:00.966) 0:00:01.585 ***** 2025-11-08 14:08:41.484176 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:08:41.484180 | orchestrator | ok: [testbed-node-1] 2025-11-08 14:08:41.484185 | orchestrator | ok: [testbed-node-2] 2025-11-08 14:08:41.484189 | orchestrator | 2025-11-08 14:08:41.484193 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 14:08:41.484199 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 14:08:41.484206 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 14:08:41.484210 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 14:08:41.484214 | orchestrator | 2025-11-08 14:08:41.484219 | orchestrator | 2025-11-08 14:08:41.484223 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 14:08:41.484227 | orchestrator | Saturday 08 November 2025 14:07:47 +0000 (0:01:09.808) 0:01:11.394 ***** 2025-11-08 14:08:41.484232 | orchestrator | =============================================================================== 2025-11-08 14:08:41.484236 | orchestrator | Waiting for Nova public port to be UP ---------------------------------- 69.81s 2025-11-08 14:08:41.484240 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.97s 2025-11-08 14:08:41.484245 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.40s 2025-11-08 14:08:41.484249 | orchestrator | 2025-11-08 14:08:41.484253 | orchestrator | 2025-11-08 14:08:41.484258 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-08 14:08:41.484262 | orchestrator | 2025-11-08 14:08:41.484266 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-08 14:08:41.484271 | orchestrator | Saturday 08 November 2025 14:06:13 +0000 (0:00:00.335) 0:00:00.335 ***** 2025-11-08 14:08:41.484275 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:08:41.484279 | orchestrator | ok: [testbed-node-1] 2025-11-08 14:08:41.484284 | orchestrator | ok: [testbed-node-2] 2025-11-08 14:08:41.484288 | orchestrator | 2025-11-08 14:08:41.484292 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-08 14:08:41.484297 | orchestrator | Saturday 08 November 2025 14:06:13 +0000 (0:00:00.363) 0:00:00.698 ***** 2025-11-08 14:08:41.484356 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-11-08 14:08:41.484362 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-11-08 14:08:41.484366 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-11-08 14:08:41.484371 | orchestrator | 2025-11-08 14:08:41.484375 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-11-08 14:08:41.484380 | orchestrator | 2025-11-08 14:08:41.484384 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-11-08 14:08:41.484388 | orchestrator | Saturday 08 November 2025 14:06:14 +0000 (0:00:00.595) 0:00:01.293 
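The "grafana : include_tasks" step logged here is the usual kolla-ansible dispatch: the role's main task file includes an action-specific file, so a deploy run pulls in roles/grafana/tasks/deploy.yml as the next log line shows. A rough sketch of that dispatch, with kolla_action being the variable kolla-ansible uses for the requested action (the surrounding structure is simplified and illustrative):

  # roles/grafana/tasks/main.yml -- simplified sketch
  - name: include_tasks
    ansible.builtin.include_tasks: "{{ kolla_action }}.yml"
    # kolla_action is e.g. "deploy", "reconfigure" or "upgrade", so a deploy
    # run includes roles/grafana/tasks/deploy.yml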
***** 2025-11-08 14:08:41.484397 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 14:08:41.484402 | orchestrator | 2025-11-08 14:08:41.484406 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-11-08 14:08:41.484411 | orchestrator | Saturday 08 November 2025 14:06:15 +0000 (0:00:00.756) 0:00:02.049 ***** 2025-11-08 14:08:41.484419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-08 14:08:41.484451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-08 14:08:41.484457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-08 14:08:41.484462 | orchestrator | 2025-11-08 14:08:41.484466 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-11-08 14:08:41.484470 | orchestrator | Saturday 08 November 2025 14:06:16 +0000 (0:00:01.011) 0:00:03.061 ***** 2025-11-08 14:08:41.484475 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-11-08 14:08:41.484480 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-11-08 14:08:41.484508 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-08 14:08:41.484513 | orchestrator | 2025-11-08 14:08:41.484559 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-11-08 14:08:41.484565 | orchestrator | Saturday 08 November 2025 14:06:17 +0000 (0:00:00.888) 0:00:03.949 ***** 2025-11-08 14:08:41.484570 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2025-11-08 14:08:41.484603 | orchestrator | 2025-11-08 14:08:41.484608 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-11-08 14:08:41.484613 | orchestrator | Saturday 08 November 2025 14:06:17 +0000 (0:00:00.667) 0:00:04.616 ***** 2025-11-08 14:08:41.484623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-08 14:08:41.484633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-08 14:08:41.484639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-08 14:08:41.484644 | orchestrator | 2025-11-08 14:08:41.484649 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-11-08 14:08:41.484658 | orchestrator | Saturday 08 November 2025 14:06:18 +0000 (0:00:01.291) 0:00:05.907 ***** 2025-11-08 14:08:41.484663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-11-08 14:08:41.484669 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:08:41.484674 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-11-08 14:08:41.484679 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:08:41.484684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-11-08 14:08:41.484693 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:08:41.484697 | orchestrator | 2025-11-08 14:08:41.484702 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-11-08 14:08:41.484707 | orchestrator | Saturday 08 November 2025 14:06:19 +0000 (0:00:00.348) 0:00:06.255 ***** 2025-11-08 14:08:41.484716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-11-08 14:08:41.484721 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:08:41.484726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-11-08 14:08:41.484731 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:08:41.484742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-11-08 14:08:41.484748 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:08:41.484752 | orchestrator | 2025-11-08 14:08:41.484757 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-11-08 14:08:41.484762 | orchestrator | Saturday 08 November 2025 14:06:20 +0000 (0:00:00.669) 0:00:06.925 ***** 2025-11-08 14:08:41.484767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-08 14:08:41.484772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-08 14:08:41.484783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-08 14:08:41.484788 | orchestrator | 2025-11-08 14:08:41.484796 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-11-08 14:08:41.484801 | orchestrator | Saturday 08 November 2025 14:06:21 +0000 (0:00:01.283) 0:00:08.209 ***** 2025-11-08 14:08:41.484806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-08 14:08:41.484812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-08 14:08:41.484822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-08 14:08:41.484828 | orchestrator | 2025-11-08 14:08:41.484833 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-11-08 14:08:41.484837 | orchestrator | Saturday 08 November 2025 14:06:22 +0000 (0:00:01.376) 0:00:09.585 ***** 2025-11-08 14:08:41.484842 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:08:41.484847 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:08:41.484852 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:08:41.484857 | orchestrator | 2025-11-08 14:08:41.484862 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-11-08 14:08:41.484867 | orchestrator | Saturday 08 November 2025 14:06:23 +0000 (0:00:00.426) 0:00:10.012 ***** 2025-11-08 14:08:41.484872 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-11-08 14:08:41.484882 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-11-08 14:08:41.484887 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-11-08 14:08:41.484892 | orchestrator | 2025-11-08 14:08:41.484896 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-11-08 14:08:41.484901 | orchestrator | Saturday 08 November 2025 14:06:24 +0000 (0:00:01.304) 0:00:11.317 ***** 2025-11-08 14:08:41.484906 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-11-08 14:08:41.484911 | orchestrator | changed: [testbed-node-1] => 
(item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-11-08 14:08:41.484916 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-11-08 14:08:41.484921 | orchestrator | 2025-11-08 14:08:41.484926 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-11-08 14:08:41.484931 | orchestrator | Saturday 08 November 2025 14:06:25 +0000 (0:00:01.322) 0:00:12.639 ***** 2025-11-08 14:08:41.484935 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-08 14:08:41.484940 | orchestrator | 2025-11-08 14:08:41.484944 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-11-08 14:08:41.484948 | orchestrator | Saturday 08 November 2025 14:06:26 +0000 (0:00:01.055) 0:00:13.694 ***** 2025-11-08 14:08:41.484953 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-11-08 14:08:41.484957 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-11-08 14:08:41.484961 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:08:41.484966 | orchestrator | ok: [testbed-node-1] 2025-11-08 14:08:41.484970 | orchestrator | ok: [testbed-node-2] 2025-11-08 14:08:41.484974 | orchestrator | 2025-11-08 14:08:41.484981 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-11-08 14:08:41.484986 | orchestrator | Saturday 08 November 2025 14:06:27 +0000 (0:00:00.863) 0:00:14.558 ***** 2025-11-08 14:08:41.484990 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:08:41.484994 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:08:41.485005 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:08:41.485010 | orchestrator | 2025-11-08 14:08:41.485014 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-11-08 14:08:41.485019 | orchestrator | Saturday 08 November 2025 14:06:28 +0000 (0:00:00.596) 0:00:15.155 ***** 2025-11-08 14:08:41.485024 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1090549, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6721945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1090549, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6721945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485039 | orchestrator | changed: [testbed-node-2] => 
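The two provisioning tasks above hand Grafana its configuration as files: prometheus.yaml.j2 registers Prometheus as a data source, and provisioning.yaml declares a file-based dashboard provider for the JSON dashboards copied in the following task. A minimal sketch of what such Grafana provisioning files typically look like; the URL, port and path are placeholders, not values read from this deployment:

  # datasources/prometheus.yaml -- illustrative data source provisioning
  apiVersion: 1
  datasources:
    - name: Prometheus
      type: prometheus
      access: proxy
      url: http://192.0.2.10:9091    # placeholder Prometheus endpoint
      isDefault: true

  # dashboards/provisioning.yaml -- illustrative dashboard provider
  apiVersion: 1
  providers:
    - name: default
      folder: ""
      type: file
      options:
        path: /etc/grafana/dashboards    # directory the dashboard JSON files land in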
(item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1090549, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6721945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1090631, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6849964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1090631, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6849964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1090631, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6849964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1090567, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6739962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1090567, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6739962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1090567, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6739962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1090634, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6863616, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1090634, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6863616, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1090634, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6863616, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1090590, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 
1762560146.0, 'ctime': 1762607710.6781015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1090590, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6781015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1090590, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6781015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1090611, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.68259, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1090611, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.68259, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1090611, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.68259, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1090545, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6705892, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1090545, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6705892, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1090545, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6705892, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1090560, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6730776, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1090560, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6730776, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1090560, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6730776, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1090571, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.675341, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1090571, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.675341, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1090571, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.675341, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1090600, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6794295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1090600, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6794295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1090600, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6794295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1090627, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6843965, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1090627, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6843965, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1090627, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6843965, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1090564, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6736512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1090564, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6736512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1090564, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6736512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1090607, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6813693, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1090607, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6813693, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1090607, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6813693, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485650 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1090595, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6786814, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1090595, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6786814, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1090595, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6786814, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1090583, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6769962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1090583, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6769962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1090583, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6769962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1090578, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6763928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1090578, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6763928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1090578, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6763928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1090602, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.680402, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1090602, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.680402, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1090602, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.680402, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1090575, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6759574, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1090575, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6759574, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1090622, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6834805, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1090575, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6759574, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1090622, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6834805, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1090906, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7565353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1090622, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6834805, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1090906, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7565353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1090659, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7068055, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1090906, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7565353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1090659, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7068055, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1090648, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6899965, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1090659, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7068055, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1090648, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6899965, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485823 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1090716, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7097178, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1090648, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6899965, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1090716, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7097178, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1090639, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6878722, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1090716, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7097178, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1090639, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6878722, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1090875, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.750656, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1090639, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6878722, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1090875, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.750656, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1090718, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.718978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1090875, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.750656, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1090718, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.718978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1090881, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7510254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1090718, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.718978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1090881, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7510254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1090900, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7549975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1090881, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7510254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1090900, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7549975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1090739, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7496784, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1090900, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7549975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 
1090739, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7496784, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1090712, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7091813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1090739, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7496784, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1090712, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7091813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1090655, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6949966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.485996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1090712, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7091813, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.486001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1090655, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6949966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.486006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1090709, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7079968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.486013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1090655, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6949966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.486067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1090709, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7079968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.486072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1090652, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6919966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.486080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1090709, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7079968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.486085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1090652, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6919966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.486090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1090714, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7091813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.486098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1090652, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6919966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.486108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1090714, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7091813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.486114 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1090893, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7546775, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.486123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1090714, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7091813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.486128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1090893, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7546775, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.486133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1090888, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7529974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.486142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1090893, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7546775, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.486150 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1090641, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6890993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.486155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1090888, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7529974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.486160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1090888, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7529974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.486168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1090645, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6898417, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.486174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1090641, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6890993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.486183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1090736, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7203434, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.486191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1090641, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6890993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.486196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1090645, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6898417, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.486201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1090884, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.751748, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.486209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1090736, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7203434, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.486215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1090645, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.6898417, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.486223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1090884, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.751748, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.486228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1090736, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.7203434, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.486236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1090884, 'dev': 94, 'nlink': 1, 'atime': 1762560146.0, 'mtime': 1762560146.0, 'ctime': 1762607710.751748, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-08 14:08:41.486241 | orchestrator | 2025-11-08 14:08:41.486247 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-11-08 14:08:41.486253 | orchestrator | Saturday 08 November 2025 14:07:06 +0000 (0:00:38.695) 0:00:53.851 ***** 2025-11-08 14:08:41.486259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 
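The item being looped over here is the kolla-ansible service definition for the grafana container (image, bind mounts, and the internal/external haproxy frontends on port 3000). A minimal Python sketch, assuming a dict shaped like the one dumped in the log above (the summarize_service helper is purely illustrative and not part of kolla-ansible or OSISM), showing how such a definition can be summarised:

# Illustrative copy of the 'grafana' item printed by the "Check grafana containers" task above.
grafana_service = {
    "container_name": "grafana",
    "group": "grafana",
    "enabled": True,
    "image": "registry.osism.tech/kolla/grafana:2024.2",
    "volumes": [
        "/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro",
        "/etc/localtime:/etc/localtime:ro",
        "/etc/timezone:/etc/timezone:ro",
        "kolla_logs:/var/log/kolla/",
    ],
    "dimensions": {},
    "haproxy": {
        "grafana_server": {
            "enabled": "yes", "mode": "http", "external": False,
            "port": "3000", "listen_port": "3000",
        },
        "grafana_server_external": {
            "enabled": True, "mode": "http", "external": True,
            "external_fqdn": "api.testbed.osism.xyz",
            "port": "3000", "listen_port": "3000",
        },
    },
}

def summarize_service(name, svc):
    # Collect the haproxy frontends that are enabled (the log dump uses both True and "yes").
    frontends = [
        f"{fe_name} -> :{fe['listen_port']} ({'external' if fe.get('external') else 'internal'})"
        for fe_name, fe in svc.get("haproxy", {}).items()
        if fe.get("enabled") in (True, "yes")
    ]
    return f"{name}: image={svc['image']}, {len(svc['volumes'])} volumes, " + ", ".join(frontends)

print(summarize_service("grafana", grafana_service))
# grafana: image=registry.osism.tech/kolla/grafana:2024.2, 4 volumes,
# grafana_server -> :3000 (internal), grafana_server_external -> :3000 (external)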
2025-11-08 14:08:41.486267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-08 14:08:41.486271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-08 14:08:41.486280 | orchestrator | 2025-11-08 14:08:41.486284 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-11-08 14:08:41.486289 | orchestrator | Saturday 08 November 2025 14:07:08 +0000 (0:00:01.121) 0:00:54.972 ***** 2025-11-08 14:08:41.486293 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:08:41.486297 | orchestrator | 2025-11-08 14:08:41.486302 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-11-08 14:08:41.486306 | orchestrator | Saturday 08 November 2025 14:07:10 +0000 (0:00:02.272) 0:00:57.244 ***** 2025-11-08 14:08:41.486310 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:08:41.486315 | orchestrator | 2025-11-08 14:08:41.486319 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-11-08 14:08:41.486323 | orchestrator | Saturday 08 November 2025 14:07:12 +0000 (0:00:02.168) 0:00:59.413 ***** 2025-11-08 14:08:41.486328 | orchestrator | 2025-11-08 14:08:41.486332 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-11-08 14:08:41.486336 | orchestrator | Saturday 08 November 2025 14:07:12 +0000 (0:00:00.069) 0:00:59.482 ***** 2025-11-08 14:08:41.486341 | orchestrator | 2025-11-08 14:08:41.486345 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-11-08 14:08:41.486350 | orchestrator | Saturday 08 November 2025 14:07:12 +0000 (0:00:00.089) 0:00:59.572 ***** 2025-11-08 14:08:41.486354 | orchestrator | 2025-11-08 14:08:41.486358 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-11-08 14:08:41.486362 | orchestrator | Saturday 08 November 2025 14:07:12 +0000 (0:00:00.270) 0:00:59.842 ***** 2025-11-08 14:08:41.486367 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:08:41.486371 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:08:41.486375 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:08:41.486380 | orchestrator | 2025-11-08 14:08:41.486384 | 
orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-11-08 14:08:41.486391 | orchestrator | Saturday 08 November 2025 14:07:20 +0000 (0:00:07.561) 0:01:07.404 ***** 2025-11-08 14:08:41.486396 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:08:41.486400 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:08:41.486404 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-11-08 14:08:41.486410 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-11-08 14:08:41.486414 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2025-11-08 14:08:41.486418 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:08:41.486423 | orchestrator | 2025-11-08 14:08:41.486427 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-11-08 14:08:41.486432 | orchestrator | Saturday 08 November 2025 14:07:59 +0000 (0:00:39.249) 0:01:46.653 ***** 2025-11-08 14:08:41.486436 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:08:41.486440 | orchestrator | changed: [testbed-node-1] 2025-11-08 14:08:41.486445 | orchestrator | changed: [testbed-node-2] 2025-11-08 14:08:41.486449 | orchestrator | 2025-11-08 14:08:41.486453 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-11-08 14:08:41.486458 | orchestrator | Saturday 08 November 2025 14:08:33 +0000 (0:00:33.379) 0:02:20.032 ***** 2025-11-08 14:08:41.486462 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:08:41.486466 | orchestrator | 2025-11-08 14:08:41.486470 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-11-08 14:08:41.486478 | orchestrator | Saturday 08 November 2025 14:08:35 +0000 (0:00:02.364) 0:02:22.397 ***** 2025-11-08 14:08:41.486482 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:08:41.486486 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:08:41.486491 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:08:41.486495 | orchestrator | 2025-11-08 14:08:41.486499 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-11-08 14:08:41.486504 | orchestrator | Saturday 08 November 2025 14:08:36 +0000 (0:00:00.584) 0:02:22.982 ***** 2025-11-08 14:08:41.486512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2025-11-08 14:08:41.486518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-11-08 14:08:41.486525 | orchestrator | 2025-11-08 14:08:41.486529 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-11-08 14:08:41.486533 | orchestrator | Saturday 08 November 2025 14:08:38 +0000 (0:00:02.505) 0:02:25.487 ***** 2025-11-08 14:08:41.486538 | orchestrator | skipping: 
[testbed-node-0] 2025-11-08 14:08:41.486542 | orchestrator | 2025-11-08 14:08:41.486546 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 14:08:41.486551 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-11-08 14:08:41.486556 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-11-08 14:08:41.486561 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-11-08 14:08:41.486565 | orchestrator | 2025-11-08 14:08:41.486570 | orchestrator | 2025-11-08 14:08:41.486587 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 14:08:41.486592 | orchestrator | Saturday 08 November 2025 14:08:39 +0000 (0:00:00.494) 0:02:25.982 ***** 2025-11-08 14:08:41.486596 | orchestrator | =============================================================================== 2025-11-08 14:08:41.486600 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 39.25s 2025-11-08 14:08:41.486605 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 38.70s 2025-11-08 14:08:41.486609 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 33.38s 2025-11-08 14:08:41.486613 | orchestrator | grafana : Restart first grafana container ------------------------------- 7.56s 2025-11-08 14:08:41.486617 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.51s 2025-11-08 14:08:41.486622 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.36s 2025-11-08 14:08:41.486626 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.27s 2025-11-08 14:08:41.486630 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.17s 2025-11-08 14:08:41.486635 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.38s 2025-11-08 14:08:41.486639 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.32s 2025-11-08 14:08:41.486643 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.30s 2025-11-08 14:08:41.486648 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.29s 2025-11-08 14:08:41.486652 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.28s 2025-11-08 14:08:41.486662 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.12s 2025-11-08 14:08:41.486666 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 1.06s 2025-11-08 14:08:41.486671 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.01s 2025-11-08 14:08:41.486675 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.89s 2025-11-08 14:08:41.486680 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.86s 2025-11-08 14:08:41.486684 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.76s 2025-11-08 14:08:41.486688 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.67s 2025-11-08 14:08:41.486692 | 
orchestrator | 2025-11-08 14:08:41 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:08:44.516169 | orchestrator | 2025-11-08 14:08:44 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:08:44.516464 | orchestrator | 2025-11-08 14:08:44 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:08:44.516487 | orchestrator | 2025-11-08 14:08:44 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:08:47.555770 | orchestrator | 2025-11-08 14:08:47 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:08:47.557235 | orchestrator | 2025-11-08 14:08:47 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:08:47.557267 | orchestrator | 2025-11-08 14:08:47 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:08:50.597257 | orchestrator | 2025-11-08 14:08:50 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:08:50.597367 | orchestrator | 2025-11-08 14:08:50 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:08:50.597380 | orchestrator | 2025-11-08 14:08:50 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:08:53.633954 | orchestrator | 2025-11-08 14:08:53 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:08:53.634959 | orchestrator | 2025-11-08 14:08:53 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:08:53.634982 | orchestrator | 2025-11-08 14:08:53 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:08:56.673186 | orchestrator | 2025-11-08 14:08:56 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:08:56.673546 | orchestrator | 2025-11-08 14:08:56 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:08:56.674593 | orchestrator | 2025-11-08 14:08:56 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:08:59.718396 | orchestrator | 2025-11-08 14:08:59 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:08:59.719993 | orchestrator | 2025-11-08 14:08:59 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:08:59.720056 | orchestrator | 2025-11-08 14:08:59 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:09:02.762922 | orchestrator | 2025-11-08 14:09:02 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:09:02.763056 | orchestrator | 2025-11-08 14:09:02 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:09:02.763066 | orchestrator | 2025-11-08 14:09:02 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:09:05.809676 | orchestrator | 2025-11-08 14:09:05 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:09:05.811460 | orchestrator | 2025-11-08 14:09:05 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:09:05.811538 | orchestrator | 2025-11-08 14:09:05 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:09:08.860917 | orchestrator | 2025-11-08 14:09:08 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:09:08.864211 | orchestrator | 2025-11-08 14:09:08 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:09:08.864272 | orchestrator | 2025-11-08 14:09:08 | INFO  | Wait 1 second(s) until the next check 2025-11-08 
14:09:11.905179 | orchestrator | 2025-11-08 14:09:11 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:09:11.906405 | orchestrator | 2025-11-08 14:09:11 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:09:11.906469 | orchestrator | 2025-11-08 14:09:11 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:09:14.948331 | orchestrator | 2025-11-08 14:09:14 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:09:14.948477 | orchestrator | 2025-11-08 14:09:14 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:09:14.948538 | orchestrator | 2025-11-08 14:09:14 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:09:17.991827 | orchestrator | 2025-11-08 14:09:17 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:09:17.993444 | orchestrator | 2025-11-08 14:09:17 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:09:17.993497 | orchestrator | 2025-11-08 14:09:17 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:09:21.044367 | orchestrator | 2025-11-08 14:09:21 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:09:21.045159 | orchestrator | 2025-11-08 14:09:21 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:09:21.045196 | orchestrator | 2025-11-08 14:09:21 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:09:24.104362 | orchestrator | 2025-11-08 14:09:24 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:09:24.107822 | orchestrator | 2025-11-08 14:09:24 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:09:24.107879 | orchestrator | 2025-11-08 14:09:24 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:09:27.159460 | orchestrator | 2025-11-08 14:09:27 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:09:27.162359 | orchestrator | 2025-11-08 14:09:27 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:09:27.162460 | orchestrator | 2025-11-08 14:09:27 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:09:30.200061 | orchestrator | 2025-11-08 14:09:30 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:09:30.201311 | orchestrator | 2025-11-08 14:09:30 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:09:30.201366 | orchestrator | 2025-11-08 14:09:30 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:09:33.249229 | orchestrator | 2025-11-08 14:09:33 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:09:33.249551 | orchestrator | 2025-11-08 14:09:33 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:09:33.249900 | orchestrator | 2025-11-08 14:09:33 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:09:36.305101 | orchestrator | 2025-11-08 14:09:36 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:09:36.306134 | orchestrator | 2025-11-08 14:09:36 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:09:36.306542 | orchestrator | 2025-11-08 14:09:36 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:09:39.345256 | orchestrator | 2025-11-08 14:09:39 | INFO  | Task 
be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:09:39.346325 | orchestrator | 2025-11-08 14:09:39 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:09:39.346358 | orchestrator | 2025-11-08 14:09:39 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:09:42.379506 | orchestrator | 2025-11-08 14:09:42 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:09:42.381907 | orchestrator | 2025-11-08 14:09:42 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:09:42.381945 | orchestrator | 2025-11-08 14:09:42 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:09:45.424397 | orchestrator | 2025-11-08 14:09:45 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:09:45.425792 | orchestrator | 2025-11-08 14:09:45 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:09:45.425828 | orchestrator | 2025-11-08 14:09:45 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:09:48.469567 | orchestrator | 2025-11-08 14:09:48 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:09:48.471375 | orchestrator | 2025-11-08 14:09:48 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:09:48.471437 | orchestrator | 2025-11-08 14:09:48 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:09:51.511086 | orchestrator | 2025-11-08 14:09:51 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:09:51.513339 | orchestrator | 2025-11-08 14:09:51 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:09:51.513384 | orchestrator | 2025-11-08 14:09:51 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:09:54.567464 | orchestrator | 2025-11-08 14:09:54 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:09:54.567944 | orchestrator | 2025-11-08 14:09:54 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:09:54.569222 | orchestrator | 2025-11-08 14:09:54 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:09:57.615216 | orchestrator | 2025-11-08 14:09:57 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:09:57.617162 | orchestrator | 2025-11-08 14:09:57 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:09:57.617214 | orchestrator | 2025-11-08 14:09:57 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:10:00.659589 | orchestrator | 2025-11-08 14:10:00 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:10:00.662154 | orchestrator | 2025-11-08 14:10:00 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:10:00.662235 | orchestrator | 2025-11-08 14:10:00 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:10:03.702702 | orchestrator | 2025-11-08 14:10:03 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:10:03.705928 | orchestrator | 2025-11-08 14:10:03 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:10:03.706067 | orchestrator | 2025-11-08 14:10:03 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:10:06.753952 | orchestrator | 2025-11-08 14:10:06 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:10:06.755093 | orchestrator 
| 2025-11-08 14:10:06 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:10:06.755389 | orchestrator | 2025-11-08 14:10:06 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:10:09.803188 | orchestrator | 2025-11-08 14:10:09 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:10:09.804974 | orchestrator | 2025-11-08 14:10:09 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:10:09.805122 | orchestrator | 2025-11-08 14:10:09 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:10:12.848818 | orchestrator | 2025-11-08 14:10:12 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:10:12.851122 | orchestrator | 2025-11-08 14:10:12 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:10:12.851177 | orchestrator | 2025-11-08 14:10:12 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:10:15.901588 | orchestrator | 2025-11-08 14:10:15 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:10:15.902933 | orchestrator | 2025-11-08 14:10:15 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:10:15.902985 | orchestrator | 2025-11-08 14:10:15 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:10:18.948717 | orchestrator | 2025-11-08 14:10:18 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:10:18.948954 | orchestrator | 2025-11-08 14:10:18 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:10:18.949126 | orchestrator | 2025-11-08 14:10:18 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:10:22.002225 | orchestrator | 2025-11-08 14:10:22 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:10:22.002873 | orchestrator | 2025-11-08 14:10:22 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:10:22.002968 | orchestrator | 2025-11-08 14:10:22 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:10:25.057467 | orchestrator | 2025-11-08 14:10:25 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:10:25.061180 | orchestrator | 2025-11-08 14:10:25 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:10:25.062563 | orchestrator | 2025-11-08 14:10:25 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:10:28.111631 | orchestrator | 2025-11-08 14:10:28 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:10:28.113457 | orchestrator | 2025-11-08 14:10:28 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:10:28.113497 | orchestrator | 2025-11-08 14:10:28 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:10:31.152828 | orchestrator | 2025-11-08 14:10:31 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:10:31.155000 | orchestrator | 2025-11-08 14:10:31 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:10:31.155038 | orchestrator | 2025-11-08 14:10:31 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:10:34.198583 | orchestrator | 2025-11-08 14:10:34 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:10:34.199602 | orchestrator | 2025-11-08 14:10:34 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 
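These INFO lines are the deployment wrapper polling two task IDs until they reach a terminal state, sleeping one second between checks. A minimal sketch of such a wait loop, assuming a caller-supplied get_task_state lookup (the actual osism client code is not reproduced here):

import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    # Re-check every task until none is left in STARTED, mirroring the
    # "Task ... is in state STARTED" / "Wait 1 second(s) until the next check"
    # messages printed above. get_task_state is whatever call returns the
    # current state for a task ID.
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)

# Tiny self-contained demo with a fake state source; task-b finishes one round later.
states = {"task-a": iter(["STARTED", "SUCCESS"]),
          "task-b": iter(["STARTED", "STARTED", "SUCCESS"])}
wait_for_tasks(states, lambda t: next(states[t]), interval=0.0)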
2025-11-08 14:10:34.199671 | orchestrator | 2025-11-08 14:10:34 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:10:37.253750 | orchestrator | 2025-11-08 14:10:37 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:10:37.256742 | orchestrator | 2025-11-08 14:10:37 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:10:37.256806 | orchestrator | 2025-11-08 14:10:37 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:10:40.308097 | orchestrator | 2025-11-08 14:10:40 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:10:40.311261 | orchestrator | 2025-11-08 14:10:40 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:10:40.311367 | orchestrator | 2025-11-08 14:10:40 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:10:43.358194 | orchestrator | 2025-11-08 14:10:43 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:10:43.360817 | orchestrator | 2025-11-08 14:10:43 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:10:43.360873 | orchestrator | 2025-11-08 14:10:43 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:10:46.420304 | orchestrator | 2025-11-08 14:10:46 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:10:46.423354 | orchestrator | 2025-11-08 14:10:46 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:10:46.423433 | orchestrator | 2025-11-08 14:10:46 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:10:49.470902 | orchestrator | 2025-11-08 14:10:49 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:10:49.471053 | orchestrator | 2025-11-08 14:10:49 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:10:49.471898 | orchestrator | 2025-11-08 14:10:49 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:10:52.514373 | orchestrator | 2025-11-08 14:10:52 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:10:52.516588 | orchestrator | 2025-11-08 14:10:52 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:10:52.516653 | orchestrator | 2025-11-08 14:10:52 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:10:55.578512 | orchestrator | 2025-11-08 14:10:55 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:10:55.579676 | orchestrator | 2025-11-08 14:10:55 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:10:55.579743 | orchestrator | 2025-11-08 14:10:55 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:10:58.618948 | orchestrator | 2025-11-08 14:10:58 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:10:58.619090 | orchestrator | 2025-11-08 14:10:58 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:10:58.619121 | orchestrator | 2025-11-08 14:10:58 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:11:01.656519 | orchestrator | 2025-11-08 14:11:01 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:11:01.660499 | orchestrator | 2025-11-08 14:11:01 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:11:01.660586 | orchestrator | 2025-11-08 14:11:01 | INFO  | Wait 1 second(s) until 
the next check 2025-11-08 14:11:04.700103 | orchestrator | 2025-11-08 14:11:04 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:11:04.700296 | orchestrator | 2025-11-08 14:11:04 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:11:04.700340 | orchestrator | 2025-11-08 14:11:04 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:11:07.741966 | orchestrator | 2025-11-08 14:11:07 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:11:07.742674 | orchestrator | 2025-11-08 14:11:07 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:11:07.742774 | orchestrator | 2025-11-08 14:11:07 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:11:10.789379 | orchestrator | 2025-11-08 14:11:10 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:11:10.793989 | orchestrator | 2025-11-08 14:11:10 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:11:10.794161 | orchestrator | 2025-11-08 14:11:10 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:11:13.844392 | orchestrator | 2025-11-08 14:11:13 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:11:13.847174 | orchestrator | 2025-11-08 14:11:13 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:11:13.847236 | orchestrator | 2025-11-08 14:11:13 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:11:16.903298 | orchestrator | 2025-11-08 14:11:16 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:11:16.903871 | orchestrator | 2025-11-08 14:11:16 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:11:16.903940 | orchestrator | 2025-11-08 14:11:16 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:11:19.949194 | orchestrator | 2025-11-08 14:11:19 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:11:19.949945 | orchestrator | 2025-11-08 14:11:19 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:11:19.949981 | orchestrator | 2025-11-08 14:11:19 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:11:22.992426 | orchestrator | 2025-11-08 14:11:22 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:11:22.994365 | orchestrator | 2025-11-08 14:11:22 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:11:22.994544 | orchestrator | 2025-11-08 14:11:22 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:11:26.053207 | orchestrator | 2025-11-08 14:11:26 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:11:26.055456 | orchestrator | 2025-11-08 14:11:26 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:11:26.056828 | orchestrator | 2025-11-08 14:11:26 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:11:29.090709 | orchestrator | 2025-11-08 14:11:29 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:11:29.091379 | orchestrator | 2025-11-08 14:11:29 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:11:29.091484 | orchestrator | 2025-11-08 14:11:29 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:11:32.127189 | orchestrator | 2025-11-08 14:11:32 | INFO  | Task 
be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:11:32.128004 | orchestrator | 2025-11-08 14:11:32 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:11:32.128035 | orchestrator | 2025-11-08 14:11:32 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:11:35.177644 | orchestrator | 2025-11-08 14:11:35 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:11:35.178297 | orchestrator | 2025-11-08 14:11:35 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:11:35.178332 | orchestrator | 2025-11-08 14:11:35 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:11:38.231882 | orchestrator | 2025-11-08 14:11:38 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:11:38.232591 | orchestrator | 2025-11-08 14:11:38 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:11:38.232666 | orchestrator | 2025-11-08 14:11:38 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:11:41.282648 | orchestrator | 2025-11-08 14:11:41 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:11:41.283791 | orchestrator | 2025-11-08 14:11:41 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:11:41.283854 | orchestrator | 2025-11-08 14:11:41 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:11:44.330486 | orchestrator | 2025-11-08 14:11:44 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:11:44.331773 | orchestrator | 2025-11-08 14:11:44 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:11:44.331811 | orchestrator | 2025-11-08 14:11:44 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:11:47.371996 | orchestrator | 2025-11-08 14:11:47 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:11:47.373801 | orchestrator | 2025-11-08 14:11:47 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:11:47.373883 | orchestrator | 2025-11-08 14:11:47 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:11:50.420976 | orchestrator | 2025-11-08 14:11:50 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:11:50.421060 | orchestrator | 2025-11-08 14:11:50 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:11:50.421070 | orchestrator | 2025-11-08 14:11:50 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:11:53.465468 | orchestrator | 2025-11-08 14:11:53 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:11:53.466234 | orchestrator | 2025-11-08 14:11:53 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:11:53.466317 | orchestrator | 2025-11-08 14:11:53 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:11:56.508401 | orchestrator | 2025-11-08 14:11:56 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:11:56.509375 | orchestrator | 2025-11-08 14:11:56 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:11:56.509520 | orchestrator | 2025-11-08 14:11:56 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:11:59.540386 | orchestrator | 2025-11-08 14:11:59 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:11:59.541390 | orchestrator 
| 2025-11-08 14:11:59 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:11:59.541427 | orchestrator | 2025-11-08 14:11:59 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:12:02.575389 | orchestrator | 2025-11-08 14:12:02 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:12:02.575535 | orchestrator | 2025-11-08 14:12:02 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:12:02.575724 | orchestrator | 2025-11-08 14:12:02 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:12:05.613949 | orchestrator | 2025-11-08 14:12:05 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:12:05.614820 | orchestrator | 2025-11-08 14:12:05 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:12:05.615004 | orchestrator | 2025-11-08 14:12:05 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:12:08.665683 | orchestrator | 2025-11-08 14:12:08 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:12:08.668213 | orchestrator | 2025-11-08 14:12:08 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:12:08.668314 | orchestrator | 2025-11-08 14:12:08 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:12:11.717484 | orchestrator | 2025-11-08 14:12:11 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:12:11.718198 | orchestrator | 2025-11-08 14:12:11 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:12:11.718219 | orchestrator | 2025-11-08 14:12:11 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:12:14.760079 | orchestrator | 2025-11-08 14:12:14 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:12:14.761036 | orchestrator | 2025-11-08 14:12:14 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:12:14.761074 | orchestrator | 2025-11-08 14:12:14 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:12:17.802619 | orchestrator | 2025-11-08 14:12:17 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:12:17.803264 | orchestrator | 2025-11-08 14:12:17 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:12:17.803287 | orchestrator | 2025-11-08 14:12:17 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:12:20.846815 | orchestrator | 2025-11-08 14:12:20 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:12:20.846940 | orchestrator | 2025-11-08 14:12:20 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:12:20.846951 | orchestrator | 2025-11-08 14:12:20 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:12:23.887515 | orchestrator | 2025-11-08 14:12:23 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:12:23.889766 | orchestrator | 2025-11-08 14:12:23 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:12:23.890199 | orchestrator | 2025-11-08 14:12:23 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:12:26.935749 | orchestrator | 2025-11-08 14:12:26 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:12:26.937509 | orchestrator | 2025-11-08 14:12:26 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 
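When reading a long polling stretch like this one, it can help to reduce the raw status lines to per-task timings. A small sketch, assuming lines shaped exactly like the ones above (the regex and the task_windows helper are ours, not part of any OSISM tooling), that extracts the console timestamp, task UUID and state, and reports how long each task was observed before its last recorded state:

import re
from datetime import datetime

# Matches e.g.
# "2025-11-08 14:11:59.541390 | orchestrator | 2025-11-08 14:11:59 | INFO  | Task <uuid> is in state STARTED"
LINE_RE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \| \S+ \| "
    r".*\| INFO\s+\| Task (?P<task>[0-9a-f-]{36}) is in state (?P<state>\w+)"
)

def task_windows(lines):
    # Return {task_id: (first_seen, last_seen, last_state)} for the polling lines.
    seen = {}
    for line in lines:
        m = LINE_RE.match(line)
        if not m:
            continue
        ts = datetime.strptime(m["ts"], "%Y-%m-%d %H:%M:%S.%f")
        first, _, _ = seen.get(m["task"], (ts, ts, m["state"]))
        seen[m["task"]] = (first, ts, m["state"])
    return seen

# Two lines copied from this log as a usage example.
sample = [
    "2025-11-08 14:08:44.516169 | orchestrator | 2025-11-08 14:08:44 | INFO  | "
    "Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED",
    "2025-11-08 14:12:45.247775 | orchestrator | 2025-11-08 14:12:45 | INFO  | "
    "Task b7476797-25df-40b2-a203-5df96e78be64 is in state SUCCESS",
]
for task, (first, last, state) in task_windows(sample).items():
    print(f"{task}: {state} after {(last - first).total_seconds():.0f}s")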
2025-11-08 14:12:26.937697 | orchestrator | 2025-11-08 14:12:26 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:12:29.987420 | orchestrator | 2025-11-08 14:12:29 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:12:29.989054 | orchestrator | 2025-11-08 14:12:29 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:12:29.989109 | orchestrator | 2025-11-08 14:12:29 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:12:33.040971 | orchestrator | 2025-11-08 14:12:33 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:12:33.042559 | orchestrator | 2025-11-08 14:12:33 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:12:33.042592 | orchestrator | 2025-11-08 14:12:33 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:12:36.089884 | orchestrator | 2025-11-08 14:12:36 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:12:36.092313 | orchestrator | 2025-11-08 14:12:36 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:12:36.092346 | orchestrator | 2025-11-08 14:12:36 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:12:39.137823 | orchestrator | 2025-11-08 14:12:39 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:12:39.139094 | orchestrator | 2025-11-08 14:12:39 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:12:39.139440 | orchestrator | 2025-11-08 14:12:39 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:12:42.185572 | orchestrator | 2025-11-08 14:12:42 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:12:42.187363 | orchestrator | 2025-11-08 14:12:42 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state STARTED 2025-11-08 14:12:42.187730 | orchestrator | 2025-11-08 14:12:42 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:12:45.240446 | orchestrator | 2025-11-08 14:12:45 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:12:45.247775 | orchestrator | 2025-11-08 14:12:45 | INFO  | Task b7476797-25df-40b2-a203-5df96e78be64 is in state SUCCESS 2025-11-08 14:12:45.250387 | orchestrator | 2025-11-08 14:12:45.250467 | orchestrator | 2025-11-08 14:12:45.250483 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-08 14:12:45.250497 | orchestrator | 2025-11-08 14:12:45.250506 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-11-08 14:12:45.250513 | orchestrator | Saturday 08 November 2025 14:03:27 +0000 (0:00:00.296) 0:00:00.296 ***** 2025-11-08 14:12:45.250520 | orchestrator | changed: [testbed-manager] 2025-11-08 14:12:45.250529 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:12:45.250536 | orchestrator | changed: [testbed-node-1] 2025-11-08 14:12:45.250542 | orchestrator | changed: [testbed-node-2] 2025-11-08 14:12:45.250549 | orchestrator | changed: [testbed-node-3] 2025-11-08 14:12:45.250565 | orchestrator | changed: [testbed-node-4] 2025-11-08 14:12:45.250579 | orchestrator | changed: [testbed-node-5] 2025-11-08 14:12:45.250585 | orchestrator | 2025-11-08 14:12:45.250592 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-08 14:12:45.250598 | orchestrator | Saturday 08 November 2025 14:03:28 +0000 
(0:00:00.875) 0:00:01.171 ***** 2025-11-08 14:12:45.250604 | orchestrator | changed: [testbed-manager] 2025-11-08 14:12:45.250610 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:12:45.250617 | orchestrator | changed: [testbed-node-1] 2025-11-08 14:12:45.250623 | orchestrator | changed: [testbed-node-2] 2025-11-08 14:12:45.250629 | orchestrator | changed: [testbed-node-3] 2025-11-08 14:12:45.250635 | orchestrator | changed: [testbed-node-4] 2025-11-08 14:12:45.250641 | orchestrator | changed: [testbed-node-5] 2025-11-08 14:12:45.250647 | orchestrator | 2025-11-08 14:12:45.250653 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-08 14:12:45.250660 | orchestrator | Saturday 08 November 2025 14:03:29 +0000 (0:00:00.779) 0:00:01.951 ***** 2025-11-08 14:12:45.250666 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-11-08 14:12:45.250673 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-11-08 14:12:45.250693 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-11-08 14:12:45.250700 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-11-08 14:12:45.250728 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-11-08 14:12:45.250734 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-11-08 14:12:45.250740 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-11-08 14:12:45.250746 | orchestrator | 2025-11-08 14:12:45.250753 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-11-08 14:12:45.250759 | orchestrator | 2025-11-08 14:12:45.250765 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-11-08 14:12:45.250771 | orchestrator | Saturday 08 November 2025 14:03:30 +0000 (0:00:00.935) 0:00:02.887 ***** 2025-11-08 14:12:45.250777 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 14:12:45.250784 | orchestrator | 2025-11-08 14:12:45.250790 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-11-08 14:12:45.250796 | orchestrator | Saturday 08 November 2025 14:03:31 +0000 (0:00:00.759) 0:00:03.646 ***** 2025-11-08 14:12:45.250802 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-11-08 14:12:45.250809 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-11-08 14:12:45.250815 | orchestrator | 2025-11-08 14:12:45.250822 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-11-08 14:12:45.250828 | orchestrator | Saturday 08 November 2025 14:03:35 +0000 (0:00:04.494) 0:00:08.141 ***** 2025-11-08 14:12:45.250834 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-11-08 14:12:45.250840 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-11-08 14:12:45.250846 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:12:45.250852 | orchestrator | 2025-11-08 14:12:45.250859 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-11-08 14:12:45.250865 | orchestrator | Saturday 08 November 2025 14:03:40 +0000 (0:00:04.535) 0:00:12.677 ***** 2025-11-08 14:12:45.250883 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:12:45.250890 | orchestrator | 2025-11-08 14:12:45.250896 | orchestrator | TASK [nova : Copying over config.json files for 
nova-api-bootstrap] ************ 2025-11-08 14:12:45.250902 | orchestrator | Saturday 08 November 2025 14:03:40 +0000 (0:00:00.664) 0:00:13.342 ***** 2025-11-08 14:12:45.250908 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:12:45.250914 | orchestrator | 2025-11-08 14:12:45.250921 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-11-08 14:12:45.250927 | orchestrator | Saturday 08 November 2025 14:03:42 +0000 (0:00:01.308) 0:00:14.650 ***** 2025-11-08 14:12:45.250933 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:12:45.250939 | orchestrator | 2025-11-08 14:12:45.250945 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-11-08 14:12:45.250951 | orchestrator | Saturday 08 November 2025 14:03:44 +0000 (0:00:02.840) 0:00:17.491 ***** 2025-11-08 14:12:45.250957 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:12:45.250965 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.250972 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.250979 | orchestrator | 2025-11-08 14:12:45.250986 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-11-08 14:12:45.250993 | orchestrator | Saturday 08 November 2025 14:03:45 +0000 (0:00:00.432) 0:00:17.923 ***** 2025-11-08 14:12:45.251001 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:12:45.251008 | orchestrator | 2025-11-08 14:12:45.251015 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-11-08 14:12:45.251022 | orchestrator | Saturday 08 November 2025 14:04:19 +0000 (0:00:34.560) 0:00:52.483 ***** 2025-11-08 14:12:45.251029 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:12:45.251035 | orchestrator | 2025-11-08 14:12:45.251042 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-11-08 14:12:45.251049 | orchestrator | Saturday 08 November 2025 14:04:36 +0000 (0:00:16.168) 0:01:08.651 ***** 2025-11-08 14:12:45.251056 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:12:45.251063 | orchestrator | 2025-11-08 14:12:45.251077 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-11-08 14:12:45.251084 | orchestrator | Saturday 08 November 2025 14:04:49 +0000 (0:00:13.065) 0:01:21.717 ***** 2025-11-08 14:12:45.251103 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:12:45.251110 | orchestrator | 2025-11-08 14:12:45.251117 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-11-08 14:12:45.251124 | orchestrator | Saturday 08 November 2025 14:04:50 +0000 (0:00:01.363) 0:01:23.081 ***** 2025-11-08 14:12:45.251131 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:12:45.251138 | orchestrator | 2025-11-08 14:12:45.251145 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-11-08 14:12:45.251152 | orchestrator | Saturday 08 November 2025 14:04:51 +0000 (0:00:00.622) 0:01:23.704 ***** 2025-11-08 14:12:45.251159 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 14:12:45.251167 | orchestrator | 2025-11-08 14:12:45.251174 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-11-08 14:12:45.251181 | orchestrator | Saturday 08 November 2025 14:04:51 +0000 (0:00:00.541) 
0:01:24.245 ***** 2025-11-08 14:12:45.251188 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:12:45.251195 | orchestrator | 2025-11-08 14:12:45.251201 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-11-08 14:12:45.251208 | orchestrator | Saturday 08 November 2025 14:05:10 +0000 (0:00:18.792) 0:01:43.038 ***** 2025-11-08 14:12:45.251215 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:12:45.251222 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.251229 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.251235 | orchestrator | 2025-11-08 14:12:45.251242 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-11-08 14:12:45.251249 | orchestrator | 2025-11-08 14:12:45.251256 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-11-08 14:12:45.251263 | orchestrator | Saturday 08 November 2025 14:05:10 +0000 (0:00:00.373) 0:01:43.411 ***** 2025-11-08 14:12:45.251275 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 14:12:45.251282 | orchestrator | 2025-11-08 14:12:45.251289 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-11-08 14:12:45.251296 | orchestrator | Saturday 08 November 2025 14:05:11 +0000 (0:00:00.674) 0:01:44.086 ***** 2025-11-08 14:12:45.251303 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.251310 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.251316 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:12:45.251323 | orchestrator | 2025-11-08 14:12:45.251330 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-11-08 14:12:45.251337 | orchestrator | Saturday 08 November 2025 14:05:13 +0000 (0:00:02.242) 0:01:46.328 ***** 2025-11-08 14:12:45.251344 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.251351 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.251358 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:12:45.251366 | orchestrator | 2025-11-08 14:12:45.251373 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-11-08 14:12:45.251380 | orchestrator | Saturday 08 November 2025 14:05:16 +0000 (0:00:02.391) 0:01:48.720 ***** 2025-11-08 14:12:45.251386 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:12:45.251393 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.251399 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.251405 | orchestrator | 2025-11-08 14:12:45.251411 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-11-08 14:12:45.251417 | orchestrator | Saturday 08 November 2025 14:05:16 +0000 (0:00:00.318) 0:01:49.038 ***** 2025-11-08 14:12:45.251423 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-11-08 14:12:45.251430 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.251436 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-11-08 14:12:45.251442 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.251453 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-11-08 14:12:45.251460 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-11-08 14:12:45.251466 | orchestrator | 2025-11-08 14:12:45.251472 | orchestrator | TASK [service-rabbitmq : nova | 
Ensure RabbitMQ vhosts exist] ****************** 2025-11-08 14:12:45.251479 | orchestrator | Saturday 08 November 2025 14:05:25 +0000 (0:00:08.885) 0:01:57.924 ***** 2025-11-08 14:12:45.251485 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:12:45.251491 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.251497 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.251503 | orchestrator | 2025-11-08 14:12:45.251510 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-11-08 14:12:45.251516 | orchestrator | Saturday 08 November 2025 14:05:25 +0000 (0:00:00.416) 0:01:58.340 ***** 2025-11-08 14:12:45.251522 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-11-08 14:12:45.251528 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:12:45.251535 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-11-08 14:12:45.251541 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.251547 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-11-08 14:12:45.251553 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.251559 | orchestrator | 2025-11-08 14:12:45.251565 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-11-08 14:12:45.251571 | orchestrator | Saturday 08 November 2025 14:05:26 +0000 (0:00:00.666) 0:01:59.007 ***** 2025-11-08 14:12:45.251577 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.251584 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.251590 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:12:45.251596 | orchestrator | 2025-11-08 14:12:45.251602 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-11-08 14:12:45.251608 | orchestrator | Saturday 08 November 2025 14:05:27 +0000 (0:00:00.706) 0:01:59.713 ***** 2025-11-08 14:12:45.251614 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.251621 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.251627 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:12:45.251633 | orchestrator | 2025-11-08 14:12:45.251639 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-11-08 14:12:45.251645 | orchestrator | Saturday 08 November 2025 14:05:28 +0000 (0:00:01.135) 0:02:00.849 ***** 2025-11-08 14:12:45.251651 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.251658 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.251669 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:12:45.251676 | orchestrator | 2025-11-08 14:12:45.251682 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-11-08 14:12:45.251688 | orchestrator | Saturday 08 November 2025 14:05:31 +0000 (0:00:02.988) 0:02:03.837 ***** 2025-11-08 14:12:45.251694 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.251701 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.251707 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:12:45.251713 | orchestrator | 2025-11-08 14:12:45.251719 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-11-08 14:12:45.251725 | orchestrator | Saturday 08 November 2025 14:05:54 +0000 (0:00:23.299) 0:02:27.137 ***** 2025-11-08 14:12:45.251731 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.251738 | orchestrator | skipping: [testbed-node-2] 
2025-11-08 14:12:45.251744 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:12:45.251750 | orchestrator | 2025-11-08 14:12:45.251756 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-11-08 14:12:45.251762 | orchestrator | Saturday 08 November 2025 14:06:07 +0000 (0:00:13.445) 0:02:40.582 ***** 2025-11-08 14:12:45.251769 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:12:45.251775 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.251781 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.251787 | orchestrator | 2025-11-08 14:12:45.251801 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-11-08 14:12:45.251807 | orchestrator | Saturday 08 November 2025 14:06:09 +0000 (0:00:01.330) 0:02:41.913 ***** 2025-11-08 14:12:45.251813 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.251819 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.251825 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:12:45.251831 | orchestrator | 2025-11-08 14:12:45.251838 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-11-08 14:12:45.251848 | orchestrator | Saturday 08 November 2025 14:06:22 +0000 (0:00:13.547) 0:02:55.460 ***** 2025-11-08 14:12:45.251855 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:12:45.251861 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.251867 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.251927 | orchestrator | 2025-11-08 14:12:45.251934 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-11-08 14:12:45.251940 | orchestrator | Saturday 08 November 2025 14:06:23 +0000 (0:00:01.012) 0:02:56.472 ***** 2025-11-08 14:12:45.251946 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:12:45.251953 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.251959 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.251965 | orchestrator | 2025-11-08 14:12:45.251971 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-11-08 14:12:45.251977 | orchestrator | 2025-11-08 14:12:45.251983 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-11-08 14:12:45.251990 | orchestrator | Saturday 08 November 2025 14:06:24 +0000 (0:00:00.491) 0:02:56.963 ***** 2025-11-08 14:12:45.251996 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 14:12:45.252003 | orchestrator | 2025-11-08 14:12:45.252009 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-11-08 14:12:45.252015 | orchestrator | Saturday 08 November 2025 14:06:24 +0000 (0:00:00.591) 0:02:57.555 ***** 2025-11-08 14:12:45.252022 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-11-08 14:12:45.252028 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-11-08 14:12:45.252034 | orchestrator | 2025-11-08 14:12:45.252040 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-11-08 14:12:45.252047 | orchestrator | Saturday 08 November 2025 14:06:28 +0000 (0:00:03.516) 0:03:01.072 ***** 2025-11-08 14:12:45.252053 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> 
https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-11-08 14:12:45.252061 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-11-08 14:12:45.252067 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-11-08 14:12:45.252073 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-11-08 14:12:45.252080 | orchestrator | 2025-11-08 14:12:45.252086 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-11-08 14:12:45.252092 | orchestrator | Saturday 08 November 2025 14:06:35 +0000 (0:00:06.826) 0:03:07.899 ***** 2025-11-08 14:12:45.252099 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-11-08 14:12:45.252105 | orchestrator | 2025-11-08 14:12:45.252111 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-11-08 14:12:45.252117 | orchestrator | Saturday 08 November 2025 14:06:38 +0000 (0:00:03.245) 0:03:11.144 ***** 2025-11-08 14:12:45.252124 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-11-08 14:12:45.252130 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-11-08 14:12:45.252136 | orchestrator | 2025-11-08 14:12:45.252142 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-11-08 14:12:45.252149 | orchestrator | Saturday 08 November 2025 14:06:42 +0000 (0:00:03.900) 0:03:15.045 ***** 2025-11-08 14:12:45.252160 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-11-08 14:12:45.252166 | orchestrator | 2025-11-08 14:12:45.252172 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-11-08 14:12:45.252179 | orchestrator | Saturday 08 November 2025 14:06:46 +0000 (0:00:03.700) 0:03:18.745 ***** 2025-11-08 14:12:45.252185 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-11-08 14:12:45.252191 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-11-08 14:12:45.252198 | orchestrator | 2025-11-08 14:12:45.252204 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-11-08 14:12:45.252215 | orchestrator | Saturday 08 November 2025 14:06:53 +0000 (0:00:07.651) 0:03:26.397 ***** 2025-11-08 14:12:45.252230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-08 14:12:45.252241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-08 14:12:45.252250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-08 14:12:45.252268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-08 14:12:45.252277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 
'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-08 14:12:45.252287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-08 14:12:45.252294 | orchestrator | 2025-11-08 14:12:45.252300 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-11-08 14:12:45.252307 | orchestrator | Saturday 08 November 2025 14:06:55 +0000 (0:00:01.358) 0:03:27.756 ***** 2025-11-08 14:12:45.252313 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:12:45.252319 | orchestrator | 2025-11-08 14:12:45.252325 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-11-08 14:12:45.252331 | orchestrator | Saturday 08 November 2025 14:06:55 +0000 (0:00:00.126) 0:03:27.883 ***** 2025-11-08 14:12:45.252337 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:12:45.252344 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.252350 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.252356 | orchestrator | 2025-11-08 14:12:45.252362 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-11-08 14:12:45.252368 | orchestrator | Saturday 08 November 2025 14:06:55 +0000 (0:00:00.345) 0:03:28.229 ***** 2025-11-08 14:12:45.252375 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-08 14:12:45.252381 | orchestrator | 2025-11-08 14:12:45.252387 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-11-08 14:12:45.252393 | orchestrator | Saturday 08 November 2025 14:06:56 +0000 (0:00:01.028) 0:03:29.257 ***** 2025-11-08 14:12:45.252399 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:12:45.252405 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.252412 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.252418 | orchestrator | 2025-11-08 14:12:45.252424 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-11-08 14:12:45.252430 | orchestrator | Saturday 08 November 2025 14:06:56 +0000 (0:00:00.363) 0:03:29.620 ***** 2025-11-08 14:12:45.252441 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 14:12:45.252447 | orchestrator | 2025-11-08 14:12:45.252453 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-11-08 14:12:45.252459 | orchestrator | Saturday 08 November 2025 14:06:57 +0000 (0:00:00.622) 0:03:30.242 ***** 2025-11-08 14:12:45.252466 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-08 14:12:45.252478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-08 14:12:45.252489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-08 14:12:45.252501 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-08 14:12:45.252507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-08 14:12:45.252518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-08 14:12:45.252525 | orchestrator | 2025-11-08 14:12:45.252531 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-11-08 14:12:45.252537 | orchestrator | Saturday 08 November 2025 14:07:00 +0000 (0:00:02.917) 0:03:33.160 ***** 2025-11-08 14:12:45.252551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}})  2025-11-08 14:12:45.252559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-08 14:12:45.252565 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:12:45.252576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-11-08 14:12:45.252583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-08 14:12:45.252590 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.252601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-11-08 14:12:45.252612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-08 14:12:45.252619 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.252625 | orchestrator | 2025-11-08 14:12:45.252632 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-11-08 14:12:45.252638 | orchestrator | Saturday 08 November 2025 14:07:01 +0000 (0:00:01.002) 0:03:34.163 ***** 2025-11-08 14:12:45.252649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-11-08 14:12:45.252656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-08 14:12:45.252663 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:12:45.252705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 
'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-11-08 14:12:45.252953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-08 14:12:45.252961 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.252968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-11-08 14:12:45.252981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck':
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-08 14:12:45.252987 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.252994 | orchestrator | 2025-11-08 14:12:45.253000 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-11-08 14:12:45.253006 | orchestrator | Saturday 08 November 2025 14:07:02 +0000 (0:00:00.952) 0:03:35.115 ***** 2025-11-08 14:12:45.253018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-08 14:12:45.253030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-08 14:12:45.253041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-08 14:12:45.253048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-08 14:12:45.253060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-08 14:12:45.253067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-08 14:12:45.253073 | orchestrator | 2025-11-08 14:12:45.253080 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-11-08 14:12:45.253086 | orchestrator | Saturday 08 November 2025 14:07:04 +0000 (0:00:02.512) 0:03:37.628 ***** 2025-11-08 14:12:45.253095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-08 14:12:45.253107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-08 14:12:45.253119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-08 14:12:45.253126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-08 14:12:45.253141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-08 14:12:45.253147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-08 14:12:45.253154 | orchestrator | 2025-11-08 14:12:45.253160 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-11-08 14:12:45.253167 | orchestrator | Saturday 08 November 2025 14:07:11 +0000 (0:00:06.247) 0:03:43.875 ***** 2025-11-08 14:12:45.253173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-11-08 14:12:45.253183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-08 
14:12:45.253190 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:12:45.253200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-11-08 14:12:45.253212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-08 14:12:45.253218 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.253225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-11-08 14:12:45.253232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-08 14:12:45.253238 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.253245 | orchestrator | 2025-11-08 14:12:45.253251 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-11-08 14:12:45.253257 | orchestrator | Saturday 08 November 2025 14:07:11 +0000 (0:00:00.630) 0:03:44.505 ***** 2025-11-08 14:12:45.253264 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:12:45.253270 | orchestrator | changed: [testbed-node-1] 2025-11-08 14:12:45.253276 | orchestrator | changed: [testbed-node-2] 2025-11-08 14:12:45.253283 | orchestrator | 2025-11-08 14:12:45.253292 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-11-08 14:12:45.253299 | orchestrator | Saturday 08 November 2025 14:07:13 +0000 (0:00:01.688) 0:03:46.193 ***** 2025-11-08 14:12:45.253305 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:12:45.253311 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.253322 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.253328 | orchestrator | 2025-11-08 14:12:45.253335 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-11-08 14:12:45.253341 | orchestrator | Saturday 08 November 2025 14:07:13 +0000 (0:00:00.340) 0:03:46.534 ***** 2025-11-08 14:12:45.253354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-08 14:12:45.253361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-08 14:12:45.253372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-08 14:12:45.253383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-08 14:12:45.253393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-08 14:12:45.253400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-08 14:12:45.253406 | orchestrator | 2025-11-08 14:12:45.253413 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-11-08 14:12:45.253419 | orchestrator | Saturday 08 November 2025 14:07:16 +0000 (0:00:02.343) 0:03:48.878 ***** 2025-11-08 14:12:45.253425 | orchestrator | 2025-11-08 14:12:45.253432 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-11-08 14:12:45.253438 | orchestrator | Saturday 08 November 2025 14:07:16 +0000 (0:00:00.142) 0:03:49.020 ***** 2025-11-08 14:12:45.253444 | orchestrator | 2025-11-08 14:12:45.253450 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-11-08 14:12:45.253456 | orchestrator | Saturday 08 November 2025 14:07:16 +0000 (0:00:00.135) 0:03:49.155 ***** 2025-11-08 14:12:45.253462 | orchestrator | 2025-11-08 14:12:45.253468 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-11-08 14:12:45.253475 | orchestrator | Saturday 08 November 2025 14:07:16 +0000 (0:00:00.146) 0:03:49.302 ***** 2025-11-08 14:12:45.253481 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:12:45.253487 | orchestrator | changed: [testbed-node-1] 2025-11-08 14:12:45.253493 | orchestrator | changed: [testbed-node-2] 2025-11-08 14:12:45.253499 | orchestrator | 2025-11-08 14:12:45.253506 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-11-08 14:12:45.253512 | orchestrator | Saturday 08 November 2025 14:07:39 +0000 (0:00:22.559) 0:04:11.862 ***** 2025-11-08 14:12:45.253518 | orchestrator | changed: [testbed-node-2] 2025-11-08 14:12:45.253524 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:12:45.253531 | orchestrator | changed: [testbed-node-1] 2025-11-08 14:12:45.253537 | orchestrator | 2025-11-08 14:12:45.253543 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-11-08 14:12:45.253549 | orchestrator | 2025-11-08 14:12:45.253556 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-11-08 14:12:45.253563 | orchestrator | Saturday 08 November 2025 14:07:50 +0000 (0:00:11.030) 0:04:22.892 ***** 2025-11-08 14:12:45.253570 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 14:12:45.253582 | orchestrator | 2025-11-08 14:12:45.253589 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-11-08 14:12:45.253596 | orchestrator | Saturday 08 November 2025 14:07:51 +0000 (0:00:01.348) 0:04:24.241 ***** 2025-11-08 14:12:45.253603 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:12:45.253611 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:12:45.253618 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:12:45.253625 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:12:45.253632 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.253639 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.253645 | orchestrator | 2025-11-08 14:12:45.253652 | orchestrator | TASK 
[Load and persist br_netfilter module] ************************************ 2025-11-08 14:12:45.253658 | orchestrator | Saturday 08 November 2025 14:07:52 +0000 (0:00:00.690) 0:04:24.932 ***** 2025-11-08 14:12:45.253664 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:12:45.253670 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.253676 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.253682 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-08 14:12:45.253689 | orchestrator | 2025-11-08 14:12:45.253695 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-11-08 14:12:45.253705 | orchestrator | Saturday 08 November 2025 14:07:53 +0000 (0:00:01.172) 0:04:26.104 ***** 2025-11-08 14:12:45.253712 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-11-08 14:12:45.253718 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-11-08 14:12:45.253724 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-11-08 14:12:45.253730 | orchestrator | 2025-11-08 14:12:45.253737 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-11-08 14:12:45.253743 | orchestrator | Saturday 08 November 2025 14:07:54 +0000 (0:00:00.727) 0:04:26.832 ***** 2025-11-08 14:12:45.253749 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-11-08 14:12:45.253756 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-11-08 14:12:45.253762 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-11-08 14:12:45.253768 | orchestrator | 2025-11-08 14:12:45.253774 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-11-08 14:12:45.253780 | orchestrator | Saturday 08 November 2025 14:07:55 +0000 (0:00:01.429) 0:04:28.261 ***** 2025-11-08 14:12:45.253787 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-11-08 14:12:45.253793 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:12:45.253799 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-11-08 14:12:45.253805 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:12:45.253812 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-11-08 14:12:45.253818 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:12:45.253824 | orchestrator | 2025-11-08 14:12:45.253830 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-11-08 14:12:45.253839 | orchestrator | Saturday 08 November 2025 14:07:56 +0000 (0:00:00.580) 0:04:28.841 ***** 2025-11-08 14:12:45.253846 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-11-08 14:12:45.253852 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-11-08 14:12:45.253858 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:12:45.253865 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-11-08 14:12:45.253892 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-11-08 14:12:45.253899 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.253905 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-11-08 14:12:45.253911 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  
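The module-load steps above, and the bridge-nf-call sysctl task that follows them, only act on the compute nodes (testbed-node-3/4/5). Reduced to plain commands they amount to roughly the following; a minimal sketch, with the persisted file name and the sysctl values assumed, since the play reports only changed/skipped and not what it writes:

  # load the bridge netfilter module now and persist it across reboots (file name assumed)
  modprobe br_netfilter
  printf 'br_netfilter\n' > /etc/modules-load.d/br_netfilter.conf
  # make bridged traffic visible to iptables/ip6tables so security-group rules apply
  sysctl -w net.bridge.bridge-nf-call-iptables=1
  sysctl -w net.bridge.bridge-nf-call-ip6tables=1
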
2025-11-08 14:12:45.253918 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-11-08 14:12:45.253929 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.253935 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-11-08 14:12:45.253942 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-11-08 14:12:45.253948 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-11-08 14:12:45.253954 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-11-08 14:12:45.253960 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-11-08 14:12:45.253967 | orchestrator | 2025-11-08 14:12:45.253973 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-11-08 14:12:45.253979 | orchestrator | Saturday 08 November 2025 14:07:58 +0000 (0:00:02.178) 0:04:31.020 ***** 2025-11-08 14:12:45.253986 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:12:45.253992 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.253998 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.254004 | orchestrator | changed: [testbed-node-3] 2025-11-08 14:12:45.254010 | orchestrator | changed: [testbed-node-5] 2025-11-08 14:12:45.254062 | orchestrator | changed: [testbed-node-4] 2025-11-08 14:12:45.254069 | orchestrator | 2025-11-08 14:12:45.254076 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-11-08 14:12:45.254082 | orchestrator | Saturday 08 November 2025 14:07:59 +0000 (0:00:01.135) 0:04:32.155 ***** 2025-11-08 14:12:45.254088 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:12:45.254095 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.254101 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.254107 | orchestrator | changed: [testbed-node-3] 2025-11-08 14:12:45.254114 | orchestrator | changed: [testbed-node-5] 2025-11-08 14:12:45.254120 | orchestrator | changed: [testbed-node-4] 2025-11-08 14:12:45.254127 | orchestrator | 2025-11-08 14:12:45.254134 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-11-08 14:12:45.254140 | orchestrator | Saturday 08 November 2025 14:08:01 +0000 (0:00:01.649) 0:04:33.805 ***** 2025-11-08 14:12:45.254146 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-11-08 14:12:45.254969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-08 14:12:45.255115 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-11-08 14:12:45.255156 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-11-08 14:12:45.255167 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-11-08 14:12:45.255176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-08 14:12:45.255185 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-08 14:12:45.255208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-08 14:12:45.255222 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-11-08 14:12:45.255239 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-08 14:12:45.255248 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-11-08 14:12:45.255257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-08 14:12:45.255265 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-08 14:12:45.255282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-08 14:12:45.255294 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-08 14:12:45.255322 | orchestrator | 2025-11-08 14:12:45.255332 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-11-08 14:12:45.255342 | orchestrator | Saturday 08 November 2025 14:08:04 +0000 (0:00:02.998) 0:04:36.804 ***** 2025-11-08 14:12:45.255353 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 14:12:45.255361 | orchestrator | 2025-11-08 14:12:45.255369 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-11-08 14:12:45.255377 | orchestrator | Saturday 08 November 2025 14:08:05 +0000 (0:00:01.225) 0:04:38.030 ***** 2025-11-08 14:12:45.255385 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 
'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-11-08 14:12:45.255395 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-11-08 14:12:45.255408 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-11-08 14:12:45.255417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-08 14:12:45.255434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-08 14:12:45.255443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-08 14:12:45.255451 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-11-08 14:12:45.255460 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-11-08 14:12:45.255469 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-11-08 14:12:45.255482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-08 14:12:45.255498 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-08 14:12:45.255510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-08 14:12:45.255519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-08 14:12:45.255527 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-08 14:12:45.255536 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-08 
14:12:45.255544 | orchestrator | 2025-11-08 14:12:45.255553 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-11-08 14:12:45.255562 | orchestrator | Saturday 08 November 2025 14:08:09 +0000 (0:00:04.031) 0:04:42.061 ***** 2025-11-08 14:12:45.255575 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-11-08 14:12:45.255594 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-11-08 14:12:45.255603 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-11-08 14:12:45.255611 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:12:45.255621 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'virsh version --daemon'], 'timeout': '30'}}})  2025-11-08 14:12:45.255631 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-11-08 14:12:45.255652 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-11-08 14:12:45.255679 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:12:45.255702 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-11-08 14:12:45.255715 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-11-08 14:12:45.255729 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-11-08 14:12:45.255742 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:12:45.255755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-11-08 14:12:45.255768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-08 14:12:45.255802 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:12:45.255826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-11-08 14:12:45.255841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-08 14:12:45.255854 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.255898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-11-08 14:12:45.255913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-08 14:12:45.255925 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.255938 | orchestrator | 2025-11-08 14:12:45.255950 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-11-08 14:12:45.255963 | orchestrator | Saturday 08 November 2025 14:08:11 +0000 (0:00:01.719) 0:04:43.781 ***** 2025-11-08 14:12:45.255976 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-11-08 14:12:45.255991 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-11-08 14:12:45.256024 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-11-08 14:12:45.256040 | orchestrator | skipping: [testbed-node-5] 
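Both backend-TLS copy tasks skip on every node because the service definitions printed above carry 'tls_backend': 'no'. In kolla-ansible the switch for this is kolla_enable_tls_backend; one way to confirm how the testbed configuration sets it is sketched below (the configuration path is an assumption, it is not visible in this output):

  # look for the backend-TLS toggle in the OSISM configuration repository (path assumed)
  grep -r 'kolla_enable_tls_backend' /opt/configuration/environments/kolla/ \
      || echo 'not set, kolla-ansible defaults it to no'
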
2025-11-08 14:12:45.256059 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-11-08 14:12:45.256073 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-11-08 14:12:45.256082 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-11-08 14:12:45.256090 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:12:45.256098 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-11-08 14:12:45.256122 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': 
True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-11-08 14:12:45.256131 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-11-08 14:12:45.256139 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:12:45.256151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-11-08 14:12:45.256159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-08 14:12:45.256167 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:12:45.256176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-11-08 14:12:45.256190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-11-08 14:12:45.256198 | orchestrator | skipping: [testbed-node-2]
2025-11-08 14:12:45.256206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-11-08 14:12:45.256220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-11-08 14:12:45.256228 | orchestrator | skipping: [testbed-node-1]
2025-11-08 14:12:45.256236 | orchestrator |
2025-11-08 14:12:45.256244 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-11-08 14:12:45.256252 | orchestrator | Saturday 08 November 2025 14:08:13 +0000 (0:00:02.602) 0:04:46.383 *****
2025-11-08 14:12:45.256260 | orchestrator | skipping: [testbed-node-0]
2025-11-08 14:12:45.256268 | orchestrator | skipping: [testbed-node-1]
2025-11-08 14:12:45.256276 | orchestrator | skipping: [testbed-node-2]
2025-11-08 14:12:45.256284 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-11-08 14:12:45.256292 | orchestrator |
2025-11-08 14:12:45.256300 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2025-11-08 14:12:45.256308 | orchestrator | Saturday 08 November 2025 14:08:14 +0000 (0:00:01.143) 0:04:47.527 *****
2025-11-08 14:12:45.256316 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-11-08 14:12:45.256328 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-11-08 14:12:45.256336 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-11-08 14:12:45.256344 | orchestrator |
2025-11-08 14:12:45.256352 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2025-11-08 14:12:45.256359 | orchestrator | Saturday 08 November 2025 14:08:16 +0000 (0:00:01.240) 0:04:48.768 *****
2025-11-08 14:12:45.256367 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-11-08 14:12:45.256375 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-11-08 14:12:45.256383 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-11-08 14:12:45.256391 | orchestrator |
2025-11-08 14:12:45.256399 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2025-11-08 14:12:45.256406 | orchestrator | Saturday 08 November 2025 14:08:17 +0000 (0:00:01.071) 0:04:49.839 *****
2025-11-08 14:12:45.256415 | orchestrator | ok: [testbed-node-3]
2025-11-08 14:12:45.256424 | orchestrator | ok: [testbed-node-4]
2025-11-08 14:12:45.256432 | orchestrator | ok: [testbed-node-5]
2025-11-08 14:12:45.256440 | orchestrator |
2025-11-08 14:12:45.256447 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2025-11-08 14:12:45.256461 | orchestrator | Saturday 08 November 2025 14:08:17 +0000 (0:00:00.658) 0:04:50.498 *****
2025-11-08 14:12:45.256469 | orchestrator | ok: [testbed-node-3]
2025-11-08 14:12:45.256477 | orchestrator | ok: [testbed-node-4]
2025-11-08 14:12:45.256485 | orchestrator | ok: [testbed-node-5]
2025-11-08 14:12:45.256493 | orchestrator |
2025-11-08 14:12:45.256500 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2025-11-08 14:12:45.256508 | orchestrator | Saturday 08 November 2025 14:08:18 +0000 (0:00:00.857) 0:04:51.355 *****
2025-11-08 14:12:45.256516 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-11-08 14:12:45.256524 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-11-08 14:12:45.256532 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-11-08 14:12:45.256540 | orchestrator |
2025-11-08 14:12:45.256547 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2025-11-08 14:12:45.256555 | orchestrator | Saturday 08 November 2025 14:08:19 +0000 (0:00:01.233) 0:04:52.588 *****
2025-11-08 14:12:45.256563 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-11-08 14:12:45.256571 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-11-08 14:12:45.256579 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-11-08 14:12:45.256586 | orchestrator |
2025-11-08 14:12:45.256594 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2025-11-08 14:12:45.256602 | orchestrator | Saturday 08 November 2025 14:08:21 +0000 (0:00:01.267) 0:04:53.856 *****
2025-11-08 14:12:45.256610 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-11-08 14:12:45.256617 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-11-08 14:12:45.256625 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-11-08 14:12:45.256634 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2025-11-08 14:12:45.256647 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2025-11-08 14:12:45.256660 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2025-11-08 14:12:45.256672 | orchestrator |
2025-11-08 14:12:45.256685 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2025-11-08 14:12:45.256699 | orchestrator | Saturday 08 November 2025 14:08:25 +0000 (0:00:04.195) 0:04:58.051 *****
2025-11-08 14:12:45.256713 | orchestrator | skipping: [testbed-node-3]
2025-11-08 14:12:45.256727 | orchestrator | skipping: [testbed-node-4]
2025-11-08 14:12:45.256740 | orchestrator | skipping: [testbed-node-5]
2025-11-08 14:12:45.256754 | orchestrator |
2025-11-08 14:12:45.256763 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2025-11-08 14:12:45.256771 | orchestrator | Saturday 08 November 2025 14:08:26 +0000 (0:00:00.604) 0:04:58.656 *****
2025-11-08 14:12:45.256779 | orchestrator | skipping: [testbed-node-3]
2025-11-08 14:12:45.256787 | orchestrator | skipping: [testbed-node-4]
2025-11-08 14:12:45.256800 | orchestrator | skipping: [testbed-node-5]
2025-11-08 14:12:45.256812 | orchestrator |
2025-11-08 14:12:45.256826 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2025-11-08 14:12:45.256838 | orchestrator | Saturday 08 November 2025 14:08:26 +0000 (0:00:00.361) 0:04:59.017 *****
2025-11-08 14:12:45.256852 | orchestrator | changed: [testbed-node-4]
2025-11-08 14:12:45.256866 | orchestrator | changed: [testbed-node-3]
2025-11-08 14:12:45.256915 | orchestrator | changed: [testbed-node-5]
2025-11-08 14:12:45.256927 | orchestrator |
2025-11-08 14:12:45.256942 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2025-11-08 14:12:45.256950 | orchestrator | Saturday 08 November 2025 14:08:27 +0000 (0:00:01.507) 0:05:00.525 *****
2025-11-08 14:12:45.256960 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-11-08 14:12:45.256969 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-11-08 14:12:45.256977 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-11-08 14:12:45.256995 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-11-08 14:12:45.257003 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-11-08 14:12:45.257011 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-11-08 14:12:45.257019 | orchestrator |
2025-11-08 14:12:45.257027 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2025-11-08 14:12:45.257040 | orchestrator | Saturday 08 November 2025 14:08:31 +0000 (0:00:03.805) 0:05:04.330 *****
2025-11-08 14:12:45.257048 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-11-08 14:12:45.257056 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-11-08 14:12:45.257064 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-11-08 14:12:45.257072 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-11-08 14:12:45.257079 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-11-08 14:12:45.257087 | orchestrator | changed: [testbed-node-3]
2025-11-08 14:12:45.257095 | orchestrator | changed: [testbed-node-4]
2025-11-08 14:12:45.257103 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-11-08 14:12:45.257110 | orchestrator | changed: [testbed-node-5]
2025-11-08 14:12:45.257118 | orchestrator |
2025-11-08 14:12:45.257126 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2025-11-08 14:12:45.257134 | orchestrator | Saturday 08 November 2025 14:08:35 +0000 (0:00:00.138) 0:05:07.993 *****
2025-11-08 14:12:45.257142 | orchestrator | skipping: [testbed-node-3]
2025-11-08 14:12:45.257150 | orchestrator |
2025-11-08 14:12:45.257158 |
orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-11-08 14:12:45.257166 | orchestrator | Saturday 08 November 2025 14:08:35 +0000 (0:00:00.138) 0:05:08.131 ***** 2025-11-08 14:12:45.257173 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:12:45.257181 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:12:45.257189 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:12:45.257197 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:12:45.257205 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.257212 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.257220 | orchestrator | 2025-11-08 14:12:45.257228 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-11-08 14:12:45.257236 | orchestrator | Saturday 08 November 2025 14:08:36 +0000 (0:00:00.666) 0:05:08.798 ***** 2025-11-08 14:12:45.257244 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-11-08 14:12:45.257252 | orchestrator | 2025-11-08 14:12:45.257260 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-11-08 14:12:45.257268 | orchestrator | Saturday 08 November 2025 14:08:36 +0000 (0:00:00.758) 0:05:09.557 ***** 2025-11-08 14:12:45.257276 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:12:45.257284 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:12:45.257291 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:12:45.257299 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:12:45.257307 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.257314 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.257322 | orchestrator | 2025-11-08 14:12:45.257330 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-11-08 14:12:45.257338 | orchestrator | Saturday 08 November 2025 14:08:37 +0000 (0:00:00.959) 0:05:10.516 ***** 2025-11-08 14:12:45.257346 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-11-08 14:12:45.257369 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-11-08 14:12:45.257382 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-11-08 14:12:45.257391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-08 14:12:45.257400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-08 14:12:45.257409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-08 14:12:45.257422 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-11-08 14:12:45.257436 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-11-08 14:12:45.257445 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-11-08 14:12:45.257457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-08 14:12:45.257466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-08 14:12:45.257481 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-08 14:12:45.257495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 
'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-08 14:12:45.257527 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-08 14:12:45.257540 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-08 14:12:45.257552 | orchestrator | 2025-11-08 14:12:45.257563 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-11-08 14:12:45.257575 | orchestrator | Saturday 08 November 2025 14:08:41 +0000 (0:00:03.902) 0:05:14.418 ***** 2025-11-08 14:12:45.257592 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-11-08 14:12:45.257606 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 
'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-11-08 14:12:45.257625 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-11-08 14:12:45.257637 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-11-08 14:12:45.257659 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-11-08 14:12:45.257678 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-11-08 14:12:45.257691 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 
'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-08 14:12:45.257704 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-08 14:12:45.257725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-08 14:12:45.258140 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-08 14:12:45.258164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-08 14:12:45.258180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-08 14:12:45.258189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-08 14:12:45.258198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-08 14:12:45.258215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-08 14:12:45.258223 | orchestrator | 2025-11-08 14:12:45.258231 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-11-08 14:12:45.258240 | orchestrator | Saturday 08 November 2025 14:08:48 +0000 (0:00:06.801) 0:05:21.219 ***** 2025-11-08 14:12:45.258247 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:12:45.258255 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:12:45.258263 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:12:45.258271 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:12:45.258278 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.258286 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.258294 | orchestrator | 2025-11-08 14:12:45.258302 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-11-08 14:12:45.258310 | orchestrator | Saturday 08 November 2025 14:08:50 +0000 (0:00:01.530) 0:05:22.750 ***** 2025-11-08 14:12:45.258318 | 
orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-11-08 14:12:45.258326 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-11-08 14:12:45.258333 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-11-08 14:12:45.258347 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-11-08 14:12:45.258370 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-11-08 14:12:45.258384 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.258397 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-11-08 14:12:45.258409 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-11-08 14:12:45.258421 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.258435 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-11-08 14:12:45.258448 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-11-08 14:12:45.258462 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:12:45.258476 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-11-08 14:12:45.258490 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-11-08 14:12:45.258504 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-11-08 14:12:45.258518 | orchestrator | 2025-11-08 14:12:45.258531 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-11-08 14:12:45.258544 | orchestrator | Saturday 08 November 2025 14:08:54 +0000 (0:00:04.584) 0:05:27.335 ***** 2025-11-08 14:12:45.258557 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:12:45.258569 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:12:45.258591 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:12:45.258619 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:12:45.258631 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.258645 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.258668 | orchestrator | 2025-11-08 14:12:45.258683 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-11-08 14:12:45.258697 | orchestrator | Saturday 08 November 2025 14:08:55 +0000 (0:00:00.680) 0:05:28.015 ***** 2025-11-08 14:12:45.258711 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-11-08 14:12:45.258725 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-11-08 14:12:45.258739 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-11-08 14:12:45.258749 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-11-08 14:12:45.258758 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-11-08 14:12:45.258767 | orchestrator | 
changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-11-08 14:12:45.258775 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-11-08 14:12:45.258784 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-11-08 14:12:45.258793 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-11-08 14:12:45.258802 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-11-08 14:12:45.258811 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.258820 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-11-08 14:12:45.258828 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-11-08 14:12:45.258837 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:12:45.258846 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.258855 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-11-08 14:12:45.258864 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-11-08 14:12:45.258895 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-11-08 14:12:45.258903 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-11-08 14:12:45.258911 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-11-08 14:12:45.258919 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-11-08 14:12:45.258927 | orchestrator | 2025-11-08 14:12:45.258935 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-11-08 14:12:45.258943 | orchestrator | Saturday 08 November 2025 14:09:00 +0000 (0:00:05.528) 0:05:33.544 ***** 2025-11-08 14:12:45.258950 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-11-08 14:12:45.258959 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-11-08 14:12:45.258973 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-11-08 14:12:45.258982 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-11-08 14:12:45.258990 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-11-08 14:12:45.258997 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-11-08 14:12:45.259011 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-11-08 14:12:45.259019 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-11-08 14:12:45.259027 | orchestrator | skipping: [testbed-node-2] => (item={'src': 
'id_rsa', 'dest': 'id_rsa'})  2025-11-08 14:12:45.259035 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-11-08 14:12:45.259043 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-11-08 14:12:45.259050 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-11-08 14:12:45.259058 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-11-08 14:12:45.259066 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.259074 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-11-08 14:12:45.259087 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-11-08 14:12:45.259095 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:12:45.259103 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-11-08 14:12:45.259110 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.259118 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-11-08 14:12:45.259126 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-11-08 14:12:45.259134 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-11-08 14:12:45.259142 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-11-08 14:12:45.259149 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-11-08 14:12:45.259157 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-11-08 14:12:45.259165 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-11-08 14:12:45.259173 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-11-08 14:12:45.259181 | orchestrator | 2025-11-08 14:12:45.259188 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-11-08 14:12:45.259196 | orchestrator | Saturday 08 November 2025 14:09:07 +0000 (0:00:07.095) 0:05:40.639 ***** 2025-11-08 14:12:45.259204 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:12:45.259212 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:12:45.259220 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:12:45.259227 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:12:45.259235 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.259243 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.259251 | orchestrator | 2025-11-08 14:12:45.259259 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-11-08 14:12:45.259266 | orchestrator | Saturday 08 November 2025 14:09:08 +0000 (0:00:00.850) 0:05:41.490 ***** 2025-11-08 14:12:45.259274 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:12:45.259282 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:12:45.259290 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:12:45.259298 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:12:45.259306 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.259313 | orchestrator | skipping: [testbed-node-2] 2025-11-08 
14:12:45.259321 | orchestrator | 2025-11-08 14:12:45.259329 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-11-08 14:12:45.259337 | orchestrator | Saturday 08 November 2025 14:09:09 +0000 (0:00:00.686) 0:05:42.177 ***** 2025-11-08 14:12:45.259345 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:12:45.259352 | orchestrator | changed: [testbed-node-3] 2025-11-08 14:12:45.259366 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.259374 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.259381 | orchestrator | changed: [testbed-node-4] 2025-11-08 14:12:45.259389 | orchestrator | changed: [testbed-node-5] 2025-11-08 14:12:45.259397 | orchestrator | 2025-11-08 14:12:45.259405 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-11-08 14:12:45.259413 | orchestrator | Saturday 08 November 2025 14:09:11 +0000 (0:00:02.296) 0:05:44.473 ***** 2025-11-08 14:12:45.259426 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-11-08 14:12:45.259435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-11-08 14:12:45.259449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-08 14:12:45.259458 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-11-08 14:12:45.259466 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.259475 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-11-08 14:12:45.259488 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:12:45.259496 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-11-08 14:12:45.259511 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-11-08 14:12:45.259519 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-11-08 14:12:45.259527 | 
orchestrator | skipping: [testbed-node-5] 2025-11-08 14:12:45.259541 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-11-08 14:12:45.259549 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-11-08 14:12:45.259558 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-11-08 14:12:45.259574 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:12:45.259582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-11-08 14:12:45.259597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-08 14:12:45.259605 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:12:45.259613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-11-08 14:12:45.259626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-08 14:12:45.259635 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.259649 | orchestrator | 2025-11-08 14:12:45.259663 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-11-08 14:12:45.259675 | orchestrator | Saturday 08 November 2025 14:09:13 +0000 (0:00:01.752) 0:05:46.226 ***** 2025-11-08 14:12:45.259689 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-11-08 14:12:45.259703 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-11-08 14:12:45.259716 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:12:45.259729 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-11-08 14:12:45.259737 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-11-08 14:12:45.259751 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:12:45.259759 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-11-08 14:12:45.259767 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-11-08 14:12:45.259775 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:12:45.259783 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-11-08 14:12:45.259790 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-11-08 14:12:45.259798 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:12:45.259806 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-11-08 14:12:45.259814 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-11-08 14:12:45.259822 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.259829 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-11-08 14:12:45.259837 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-11-08 14:12:45.259845 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.259852 | orchestrator | 2025-11-08 14:12:45.259860 | orchestrator | TASK [nova-cell : Check 
nova-cell containers] ********************************** 2025-11-08 14:12:45.259917 | orchestrator | Saturday 08 November 2025 14:09:14 +0000 (0:00:01.032) 0:05:47.258 ***** 2025-11-08 14:12:45.259929 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-11-08 14:12:45.259944 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-11-08 14:12:45.259953 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-11-08 14:12:45.259968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-08 
14:12:45.259976 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-11-08 14:12:45.259985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-08 14:12:45.259993 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-11-08 14:12:45.260007 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-11-08 14:12:45.260056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-08 14:12:45.260069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-08 14:12:45.260083 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-08 14:12:45.260091 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-08 14:12:45.260100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-08 14:12:45.260319 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-08 14:12:45.260334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-08 14:12:45.260342 | orchestrator | 2025-11-08 14:12:45.260350 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-11-08 14:12:45.260358 | orchestrator | Saturday 08 November 2025 14:09:17 +0000 (0:00:03.012) 0:05:50.271 ***** 2025-11-08 14:12:45.260377 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:12:45.260386 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:12:45.260393 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:12:45.260401 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:12:45.260409 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.260417 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.260424 | orchestrator | 2025-11-08 14:12:45.260432 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-11-08 14:12:45.260440 | orchestrator | Saturday 08 November 2025 14:09:18 +0000 (0:00:00.969) 0:05:51.240 ***** 2025-11-08 14:12:45.260448 | orchestrator | 2025-11-08 14:12:45.260455 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-11-08 14:12:45.260463 | orchestrator | Saturday 08 November 2025 14:09:18 +0000 (0:00:00.146) 0:05:51.386 ***** 2025-11-08 14:12:45.260470 | orchestrator | 2025-11-08 14:12:45.260477 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-11-08 14:12:45.260484 | orchestrator | Saturday 08 November 2025 14:09:18 +0000 (0:00:00.138) 0:05:51.525 ***** 2025-11-08 14:12:45.260490 | orchestrator | 2025-11-08 14:12:45.260497 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-11-08 14:12:45.260504 | orchestrator | Saturday 08 November 2025 14:09:19 +0000 (0:00:00.145) 0:05:51.670 ***** 2025-11-08 14:12:45.260510 | orchestrator | 2025-11-08 14:12:45.260517 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-11-08 14:12:45.260523 | orchestrator | Saturday 08 November 2025 14:09:19 +0000 (0:00:00.134) 0:05:51.805 ***** 2025-11-08 14:12:45.260530 | orchestrator | 2025-11-08 14:12:45.260537 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-11-08 14:12:45.260543 | orchestrator | Saturday 08 November 2025 14:09:19 +0000 (0:00:00.146) 0:05:51.951 ***** 2025-11-08 14:12:45.260550 | orchestrator | 2025-11-08 14:12:45.260556 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-11-08 14:12:45.260563 | orchestrator | Saturday 08 November 2025 14:09:19 +0000 (0:00:00.316) 0:05:52.268 ***** 2025-11-08 14:12:45.260569 | orchestrator | changed: [testbed-node-1] 2025-11-08 14:12:45.260576 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:12:45.260582 | orchestrator | changed: [testbed-node-2] 2025-11-08 14:12:45.260589 | orchestrator | 2025-11-08 14:12:45.260595 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-11-08 14:12:45.260602 | orchestrator | Saturday 08 November 2025 14:09:32 +0000 (0:00:12.646) 0:06:04.915 ***** 2025-11-08 14:12:45.260609 | orchestrator | changed: 
[testbed-node-0] 2025-11-08 14:12:45.260615 | orchestrator | changed: [testbed-node-1] 2025-11-08 14:12:45.260622 | orchestrator | changed: [testbed-node-2] 2025-11-08 14:12:45.260628 | orchestrator | 2025-11-08 14:12:45.260636 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-11-08 14:12:45.260647 | orchestrator | Saturday 08 November 2025 14:09:52 +0000 (0:00:19.971) 0:06:24.886 ***** 2025-11-08 14:12:45.260658 | orchestrator | changed: [testbed-node-5] 2025-11-08 14:12:45.260669 | orchestrator | changed: [testbed-node-4] 2025-11-08 14:12:45.260680 | orchestrator | changed: [testbed-node-3] 2025-11-08 14:12:45.260691 | orchestrator | 2025-11-08 14:12:45.260702 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-11-08 14:12:45.260713 | orchestrator | Saturday 08 November 2025 14:10:17 +0000 (0:00:24.935) 0:06:49.822 ***** 2025-11-08 14:12:45.260724 | orchestrator | changed: [testbed-node-3] 2025-11-08 14:12:45.260735 | orchestrator | changed: [testbed-node-5] 2025-11-08 14:12:45.260744 | orchestrator | changed: [testbed-node-4] 2025-11-08 14:12:45.260751 | orchestrator | 2025-11-08 14:12:45.260757 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-11-08 14:12:45.260764 | orchestrator | Saturday 08 November 2025 14:10:58 +0000 (0:00:41.301) 0:07:31.123 ***** 2025-11-08 14:12:45.260771 | orchestrator | changed: [testbed-node-3] 2025-11-08 14:12:45.260777 | orchestrator | changed: [testbed-node-4] 2025-11-08 14:12:45.260790 | orchestrator | changed: [testbed-node-5] 2025-11-08 14:12:45.260797 | orchestrator | 2025-11-08 14:12:45.260803 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-11-08 14:12:45.260810 | orchestrator | Saturday 08 November 2025 14:10:59 +0000 (0:00:00.865) 0:07:31.989 ***** 2025-11-08 14:12:45.260816 | orchestrator | changed: [testbed-node-3] 2025-11-08 14:12:45.260823 | orchestrator | changed: [testbed-node-4] 2025-11-08 14:12:45.260829 | orchestrator | changed: [testbed-node-5] 2025-11-08 14:12:45.260836 | orchestrator | 2025-11-08 14:12:45.260843 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-11-08 14:12:45.260854 | orchestrator | Saturday 08 November 2025 14:11:00 +0000 (0:00:00.835) 0:07:32.824 ***** 2025-11-08 14:12:45.260861 | orchestrator | changed: [testbed-node-4] 2025-11-08 14:12:45.260868 | orchestrator | changed: [testbed-node-5] 2025-11-08 14:12:45.260896 | orchestrator | changed: [testbed-node-3] 2025-11-08 14:12:45.260902 | orchestrator | 2025-11-08 14:12:45.260910 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-11-08 14:12:45.260918 | orchestrator | Saturday 08 November 2025 14:11:27 +0000 (0:00:27.000) 0:07:59.825 ***** 2025-11-08 14:12:45.260925 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:12:45.260933 | orchestrator | 2025-11-08 14:12:45.260940 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-11-08 14:12:45.260948 | orchestrator | Saturday 08 November 2025 14:11:27 +0000 (0:00:00.132) 0:07:59.958 ***** 2025-11-08 14:12:45.260955 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:12:45.260962 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:12:45.260970 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.260977 | orchestrator | 
skipping: [testbed-node-0] 2025-11-08 14:12:45.260985 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.260993 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2025-11-08 14:12:45.261002 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-11-08 14:12:45.261009 | orchestrator | 2025-11-08 14:12:45.261016 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-11-08 14:12:45.261022 | orchestrator | Saturday 08 November 2025 14:11:50 +0000 (0:00:22.772) 0:08:22.730 ***** 2025-11-08 14:12:45.261029 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:12:45.261040 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:12:45.261047 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:12:45.261053 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.261060 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:12:45.261066 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.261073 | orchestrator | 2025-11-08 14:12:45.261079 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-11-08 14:12:45.261086 | orchestrator | Saturday 08 November 2025 14:12:01 +0000 (0:00:11.318) 0:08:34.049 ***** 2025-11-08 14:12:45.261093 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:12:45.261099 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:12:45.261106 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.261112 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.261119 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:12:45.261125 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5 2025-11-08 14:12:45.261132 | orchestrator | 2025-11-08 14:12:45.261138 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-11-08 14:12:45.261145 | orchestrator | Saturday 08 November 2025 14:12:06 +0000 (0:00:04.793) 0:08:38.842 ***** 2025-11-08 14:12:45.261152 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-11-08 14:12:45.261159 | orchestrator | 2025-11-08 14:12:45.261165 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-11-08 14:12:45.261172 | orchestrator | Saturday 08 November 2025 14:12:20 +0000 (0:00:14.021) 0:08:52.864 ***** 2025-11-08 14:12:45.261183 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-11-08 14:12:45.261190 | orchestrator | 2025-11-08 14:12:45.261196 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-11-08 14:12:45.261203 | orchestrator | Saturday 08 November 2025 14:12:21 +0000 (0:00:01.451) 0:08:54.315 ***** 2025-11-08 14:12:45.261209 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:12:45.261216 | orchestrator | 2025-11-08 14:12:45.261222 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-11-08 14:12:45.261229 | orchestrator | Saturday 08 November 2025 14:12:23 +0000 (0:00:01.373) 0:08:55.689 ***** 2025-11-08 14:12:45.261235 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-11-08 14:12:45.261242 | orchestrator | 2025-11-08 14:12:45.261248 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-11-08 14:12:45.261255 
| orchestrator | Saturday 08 November 2025 14:12:35 +0000 (0:00:11.955) 0:09:07.645 ***** 2025-11-08 14:12:45.261262 | orchestrator | ok: [testbed-node-3] 2025-11-08 14:12:45.261268 | orchestrator | ok: [testbed-node-4] 2025-11-08 14:12:45.261275 | orchestrator | ok: [testbed-node-5] 2025-11-08 14:12:45.261281 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:12:45.261288 | orchestrator | ok: [testbed-node-1] 2025-11-08 14:12:45.261294 | orchestrator | ok: [testbed-node-2] 2025-11-08 14:12:45.261301 | orchestrator | 2025-11-08 14:12:45.261307 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-11-08 14:12:45.261314 | orchestrator | 2025-11-08 14:12:45.261320 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-11-08 14:12:45.261327 | orchestrator | Saturday 08 November 2025 14:12:36 +0000 (0:00:01.975) 0:09:09.620 ***** 2025-11-08 14:12:45.261334 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:12:45.261340 | orchestrator | changed: [testbed-node-1] 2025-11-08 14:12:45.261347 | orchestrator | changed: [testbed-node-2] 2025-11-08 14:12:45.261353 | orchestrator | 2025-11-08 14:12:45.261360 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-11-08 14:12:45.261366 | orchestrator | 2025-11-08 14:12:45.261373 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-11-08 14:12:45.261380 | orchestrator | Saturday 08 November 2025 14:12:38 +0000 (0:00:01.565) 0:09:11.185 ***** 2025-11-08 14:12:45.261386 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:12:45.261393 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.261399 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.261406 | orchestrator | 2025-11-08 14:12:45.261412 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-11-08 14:12:45.261419 | orchestrator | 2025-11-08 14:12:45.261425 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-11-08 14:12:45.261432 | orchestrator | Saturday 08 November 2025 14:12:39 +0000 (0:00:00.698) 0:09:11.884 ***** 2025-11-08 14:12:45.261439 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-11-08 14:12:45.261449 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-11-08 14:12:45.261456 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-11-08 14:12:45.261463 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-11-08 14:12:45.261469 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-11-08 14:12:45.261476 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-11-08 14:12:45.261483 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:12:45.261489 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-11-08 14:12:45.261496 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-11-08 14:12:45.261503 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-11-08 14:12:45.261509 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-11-08 14:12:45.261516 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-11-08 14:12:45.261528 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-11-08 14:12:45.261535 
| orchestrator | skipping: [testbed-node-4] 2025-11-08 14:12:45.261541 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-11-08 14:12:45.261548 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-11-08 14:12:45.261555 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-11-08 14:12:45.261561 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-11-08 14:12:45.261568 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-11-08 14:12:45.261574 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-11-08 14:12:45.261585 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:12:45.261592 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-11-08 14:12:45.261599 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-11-08 14:12:45.261605 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-11-08 14:12:45.261612 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-11-08 14:12:45.261619 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-11-08 14:12:45.261625 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-11-08 14:12:45.261632 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:12:45.261643 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-11-08 14:12:45.261654 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-11-08 14:12:45.261664 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-11-08 14:12:45.261676 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-11-08 14:12:45.261687 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-11-08 14:12:45.261698 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-11-08 14:12:45.261710 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.261719 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-11-08 14:12:45.261726 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-11-08 14:12:45.261732 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-11-08 14:12:45.261739 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-11-08 14:12:45.261745 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-11-08 14:12:45.261752 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-11-08 14:12:45.261758 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.261765 | orchestrator | 2025-11-08 14:12:45.261771 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-11-08 14:12:45.261778 | orchestrator | 2025-11-08 14:12:45.261785 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-11-08 14:12:45.261791 | orchestrator | Saturday 08 November 2025 14:12:40 +0000 (0:00:01.585) 0:09:13.469 ***** 2025-11-08 14:12:45.261798 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-11-08 14:12:45.261804 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-11-08 14:12:45.261811 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:12:45.261817 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-11-08 14:12:45.261824 | 
orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-11-08 14:12:45.261830 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.261837 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-11-08 14:12:45.261843 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2025-11-08 14:12:45.261850 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.261856 | orchestrator | 2025-11-08 14:12:45.261863 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-11-08 14:12:45.261889 | orchestrator | 2025-11-08 14:12:45.261897 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-11-08 14:12:45.261910 | orchestrator | Saturday 08 November 2025 14:12:41 +0000 (0:00:00.868) 0:09:14.338 ***** 2025-11-08 14:12:45.261917 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:12:45.261924 | orchestrator | 2025-11-08 14:12:45.261930 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-11-08 14:12:45.261937 | orchestrator | 2025-11-08 14:12:45.261944 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-11-08 14:12:45.261950 | orchestrator | Saturday 08 November 2025 14:12:42 +0000 (0:00:00.775) 0:09:15.113 ***** 2025-11-08 14:12:45.261957 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:12:45.261963 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:12:45.261970 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:12:45.261976 | orchestrator | 2025-11-08 14:12:45.261983 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 14:12:45.261990 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 14:12:45.262003 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-11-08 14:12:45.262064 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-11-08 14:12:45.262074 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-11-08 14:12:45.262081 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-11-08 14:12:45.262087 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-11-08 14:12:45.262094 | orchestrator | testbed-node-5 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025-11-08 14:12:45.262101 | orchestrator | 2025-11-08 14:12:45.262107 | orchestrator | 2025-11-08 14:12:45.262114 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 14:12:45.262121 | orchestrator | Saturday 08 November 2025 14:12:42 +0000 (0:00:00.441) 0:09:15.555 ***** 2025-11-08 14:12:45.262135 | orchestrator | =============================================================================== 2025-11-08 14:12:45.262142 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 41.30s 2025-11-08 14:12:45.262148 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 34.56s 2025-11-08 14:12:45.262155 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 27.00s 2025-11-08 14:12:45.262162 | 
orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 24.94s 2025-11-08 14:12:45.262168 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 23.30s 2025-11-08 14:12:45.262175 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.77s 2025-11-08 14:12:45.262182 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 22.56s 2025-11-08 14:12:45.262188 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 19.97s 2025-11-08 14:12:45.262195 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.79s 2025-11-08 14:12:45.262201 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 16.17s 2025-11-08 14:12:45.262208 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.02s 2025-11-08 14:12:45.262215 | orchestrator | nova-cell : Create cell ------------------------------------------------ 13.55s 2025-11-08 14:12:45.262221 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.45s 2025-11-08 14:12:45.262228 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.07s 2025-11-08 14:12:45.262243 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.65s 2025-11-08 14:12:45.262250 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.96s 2025-11-08 14:12:45.262256 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 11.32s 2025-11-08 14:12:45.262263 | orchestrator | nova : Restart nova-api container -------------------------------------- 11.03s 2025-11-08 14:12:45.262270 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.89s 2025-11-08 14:12:45.262276 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.65s 2025-11-08 14:12:48.297802 | orchestrator | 2025-11-08 14:12:48 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:12:48.297998 | orchestrator | 2025-11-08 14:12:48 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:12:51.348679 | orchestrator | 2025-11-08 14:12:51 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:12:51.348800 | orchestrator | 2025-11-08 14:12:51 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:12:54.396818 | orchestrator | 2025-11-08 14:12:54 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:12:54.396964 | orchestrator | 2025-11-08 14:12:54 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:12:57.447767 | orchestrator | 2025-11-08 14:12:57 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state STARTED 2025-11-08 14:12:57.447947 | orchestrator | 2025-11-08 14:12:57 | INFO  | Wait 1 second(s) until the next check 2025-11-08 14:13:00.493784 | orchestrator | 2025-11-08 14:13:00.493923 | orchestrator | 2025-11-08 14:13:00.493943 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-08 14:13:00.493954 | orchestrator | 2025-11-08 14:13:00.493964 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-08 14:13:00.493976 | orchestrator | Saturday 08 November 2025 14:07:52 +0000 (0:00:00.289) 
0:00:00.289 ***** 2025-11-08 14:13:00.493988 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:13:00.494002 | orchestrator | ok: [testbed-node-1] 2025-11-08 14:13:00.494014 | orchestrator | ok: [testbed-node-2] 2025-11-08 14:13:00.494070 | orchestrator | 2025-11-08 14:13:00.494081 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-08 14:13:00.494089 | orchestrator | Saturday 08 November 2025 14:07:53 +0000 (0:00:00.324) 0:00:00.614 ***** 2025-11-08 14:13:00.494100 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-11-08 14:13:00.494108 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-11-08 14:13:00.494115 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-11-08 14:13:00.494122 | orchestrator | 2025-11-08 14:13:00.494129 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-11-08 14:13:00.494136 | orchestrator | 2025-11-08 14:13:00.494143 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-11-08 14:13:00.494150 | orchestrator | Saturday 08 November 2025 14:07:53 +0000 (0:00:00.462) 0:00:01.077 ***** 2025-11-08 14:13:00.494158 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 14:13:00.494166 | orchestrator | 2025-11-08 14:13:00.494173 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-11-08 14:13:00.494180 | orchestrator | Saturday 08 November 2025 14:07:54 +0000 (0:00:00.780) 0:00:01.858 ***** 2025-11-08 14:13:00.494187 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-11-08 14:13:00.494193 | orchestrator | 2025-11-08 14:13:00.494200 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-11-08 14:13:00.494207 | orchestrator | Saturday 08 November 2025 14:07:58 +0000 (0:00:04.010) 0:00:05.868 ***** 2025-11-08 14:13:00.494213 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-11-08 14:13:00.494258 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-11-08 14:13:00.494270 | orchestrator | 2025-11-08 14:13:00.494282 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-11-08 14:13:00.494293 | orchestrator | Saturday 08 November 2025 14:08:05 +0000 (0:00:07.062) 0:00:12.931 ***** 2025-11-08 14:13:00.494305 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-11-08 14:13:00.494317 | orchestrator | 2025-11-08 14:13:00.494325 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-11-08 14:13:00.494333 | orchestrator | Saturday 08 November 2025 14:08:09 +0000 (0:00:03.739) 0:00:16.670 ***** 2025-11-08 14:13:00.494341 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-11-08 14:13:00.494349 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-11-08 14:13:00.494361 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-11-08 14:13:00.494371 | orchestrator | 2025-11-08 14:13:00.494382 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-11-08 14:13:00.494393 | orchestrator | Saturday 08 November 2025 14:08:17 +0000 
(0:00:08.600) 0:00:25.271 ***** 2025-11-08 14:13:00.494404 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-11-08 14:13:00.494416 | orchestrator | 2025-11-08 14:13:00.494428 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-11-08 14:13:00.494440 | orchestrator | Saturday 08 November 2025 14:08:21 +0000 (0:00:03.657) 0:00:28.929 ***** 2025-11-08 14:13:00.494450 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-11-08 14:13:00.494461 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-11-08 14:13:00.494471 | orchestrator | 2025-11-08 14:13:00.494478 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-11-08 14:13:00.494486 | orchestrator | Saturday 08 November 2025 14:08:29 +0000 (0:00:07.767) 0:00:36.697 ***** 2025-11-08 14:13:00.494494 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-11-08 14:13:00.494501 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-11-08 14:13:00.494509 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-11-08 14:13:00.494515 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-11-08 14:13:00.494522 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-11-08 14:13:00.494528 | orchestrator | 2025-11-08 14:13:00.494535 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-11-08 14:13:00.494542 | orchestrator | Saturday 08 November 2025 14:08:45 +0000 (0:00:16.296) 0:00:52.994 ***** 2025-11-08 14:13:00.494548 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 14:13:00.494555 | orchestrator | 2025-11-08 14:13:00.494562 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-11-08 14:13:00.494568 | orchestrator | Saturday 08 November 2025 14:08:46 +0000 (0:00:00.719) 0:00:53.714 ***** 2025-11-08 14:13:00.494575 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:13:00.494581 | orchestrator | 2025-11-08 14:13:00.494588 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2025-11-08 14:13:00.494595 | orchestrator | Saturday 08 November 2025 14:08:52 +0000 (0:00:05.873) 0:00:59.588 ***** 2025-11-08 14:13:00.494601 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:13:00.494608 | orchestrator | 2025-11-08 14:13:00.494615 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-11-08 14:13:00.494636 | orchestrator | Saturday 08 November 2025 14:08:57 +0000 (0:00:05.032) 0:01:04.621 ***** 2025-11-08 14:13:00.494643 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:13:00.494650 | orchestrator | 2025-11-08 14:13:00.494656 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2025-11-08 14:13:00.494671 | orchestrator | Saturday 08 November 2025 14:09:00 +0000 (0:00:03.397) 0:01:08.018 ***** 2025-11-08 14:13:00.494678 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-11-08 14:13:00.494684 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-11-08 14:13:00.494691 | orchestrator | 2025-11-08 14:13:00.494698 | orchestrator | TASK [octavia : Add rules for security groups] 
********************************* 2025-11-08 14:13:00.494704 | orchestrator | Saturday 08 November 2025 14:09:11 +0000 (0:00:10.886) 0:01:18.905 ***** 2025-11-08 14:13:00.494711 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2025-11-08 14:13:00.494718 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2025-11-08 14:13:00.494727 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2025-11-08 14:13:00.494735 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2025-11-08 14:13:00.494741 | orchestrator | 2025-11-08 14:13:00.494748 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2025-11-08 14:13:00.494755 | orchestrator | Saturday 08 November 2025 14:09:27 +0000 (0:00:15.777) 0:01:34.683 ***** 2025-11-08 14:13:00.494761 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:13:00.494768 | orchestrator | 2025-11-08 14:13:00.494774 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2025-11-08 14:13:00.494781 | orchestrator | Saturday 08 November 2025 14:09:31 +0000 (0:00:04.406) 0:01:39.090 ***** 2025-11-08 14:13:00.494787 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:13:00.494794 | orchestrator | 2025-11-08 14:13:00.494801 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2025-11-08 14:13:00.494812 | orchestrator | Saturday 08 November 2025 14:09:37 +0000 (0:00:05.916) 0:01:45.006 ***** 2025-11-08 14:13:00.494819 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:13:00.494826 | orchestrator | 2025-11-08 14:13:00.494832 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2025-11-08 14:13:00.494839 | orchestrator | Saturday 08 November 2025 14:09:37 +0000 (0:00:00.243) 0:01:45.250 ***** 2025-11-08 14:13:00.494845 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:13:00.494852 | orchestrator | 2025-11-08 14:13:00.494858 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-11-08 14:13:00.494865 | orchestrator | Saturday 08 November 2025 14:09:42 +0000 (0:00:04.855) 0:01:50.106 ***** 2025-11-08 14:13:00.494871 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 14:13:00.494924 | orchestrator | 2025-11-08 14:13:00.494933 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2025-11-08 14:13:00.494940 | orchestrator | Saturday 08 November 2025 14:09:43 +0000 (0:00:01.162) 0:01:51.269 ***** 2025-11-08 14:13:00.494946 | orchestrator | changed: [testbed-node-2] 2025-11-08 14:13:00.494953 | orchestrator | changed: [testbed-node-1] 2025-11-08 14:13:00.494960 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:13:00.494967 | orchestrator | 2025-11-08 14:13:00.494973 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2025-11-08 14:13:00.494980 | orchestrator | Saturday 08 November 2025 14:09:50 +0000 (0:00:06.191) 0:01:57.460 ***** 2025-11-08 14:13:00.494986 | orchestrator | changed: 
[testbed-node-1] 2025-11-08 14:13:00.494993 | orchestrator | changed: [testbed-node-2] 2025-11-08 14:13:00.494999 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:13:00.495006 | orchestrator | 2025-11-08 14:13:00.495012 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2025-11-08 14:13:00.495019 | orchestrator | Saturday 08 November 2025 14:09:54 +0000 (0:00:04.824) 0:02:02.285 ***** 2025-11-08 14:13:00.495026 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:13:00.495039 | orchestrator | changed: [testbed-node-1] 2025-11-08 14:13:00.495045 | orchestrator | changed: [testbed-node-2] 2025-11-08 14:13:00.495052 | orchestrator | 2025-11-08 14:13:00.495058 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2025-11-08 14:13:00.495065 | orchestrator | Saturday 08 November 2025 14:09:55 +0000 (0:00:00.904) 0:02:03.189 ***** 2025-11-08 14:13:00.495072 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:13:00.495078 | orchestrator | ok: [testbed-node-1] 2025-11-08 14:13:00.495085 | orchestrator | ok: [testbed-node-2] 2025-11-08 14:13:00.495092 | orchestrator | 2025-11-08 14:13:00.495098 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2025-11-08 14:13:00.495105 | orchestrator | Saturday 08 November 2025 14:09:58 +0000 (0:00:02.461) 0:02:05.650 ***** 2025-11-08 14:13:00.495112 | orchestrator | changed: [testbed-node-1] 2025-11-08 14:13:00.495118 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:13:00.495125 | orchestrator | changed: [testbed-node-2] 2025-11-08 14:13:00.495131 | orchestrator | 2025-11-08 14:13:00.495138 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2025-11-08 14:13:00.495145 | orchestrator | Saturday 08 November 2025 14:09:59 +0000 (0:00:01.581) 0:02:07.232 ***** 2025-11-08 14:13:00.495151 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:13:00.495158 | orchestrator | changed: [testbed-node-1] 2025-11-08 14:13:00.495165 | orchestrator | changed: [testbed-node-2] 2025-11-08 14:13:00.495171 | orchestrator | 2025-11-08 14:13:00.495178 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2025-11-08 14:13:00.495185 | orchestrator | Saturday 08 November 2025 14:10:01 +0000 (0:00:01.292) 0:02:08.524 ***** 2025-11-08 14:13:00.495191 | orchestrator | changed: [testbed-node-1] 2025-11-08 14:13:00.495198 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:13:00.495205 | orchestrator | changed: [testbed-node-2] 2025-11-08 14:13:00.495211 | orchestrator | 2025-11-08 14:13:00.495272 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2025-11-08 14:13:00.495280 | orchestrator | Saturday 08 November 2025 14:10:03 +0000 (0:00:02.056) 0:02:10.581 ***** 2025-11-08 14:13:00.495287 | orchestrator | changed: [testbed-node-1] 2025-11-08 14:13:00.495294 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:13:00.495300 | orchestrator | changed: [testbed-node-2] 2025-11-08 14:13:00.495307 | orchestrator | 2025-11-08 14:13:00.495314 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2025-11-08 14:13:00.495320 | orchestrator | Saturday 08 November 2025 14:10:05 +0000 (0:00:01.806) 0:02:12.387 ***** 2025-11-08 14:13:00.495327 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:13:00.495334 | orchestrator | ok: [testbed-node-1] 2025-11-08 
14:13:00.495340 | orchestrator | ok: [testbed-node-2] 2025-11-08 14:13:00.495347 | orchestrator | 2025-11-08 14:13:00.495354 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2025-11-08 14:13:00.495361 | orchestrator | Saturday 08 November 2025 14:10:05 +0000 (0:00:00.661) 0:02:13.048 ***** 2025-11-08 14:13:00.495367 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:13:00.495374 | orchestrator | ok: [testbed-node-2] 2025-11-08 14:13:00.495380 | orchestrator | ok: [testbed-node-1] 2025-11-08 14:13:00.495387 | orchestrator | 2025-11-08 14:13:00.495394 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-11-08 14:13:00.495400 | orchestrator | Saturday 08 November 2025 14:10:09 +0000 (0:00:03.856) 0:02:16.904 ***** 2025-11-08 14:13:00.495407 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 14:13:00.495414 | orchestrator | 2025-11-08 14:13:00.495421 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2025-11-08 14:13:00.495427 | orchestrator | Saturday 08 November 2025 14:10:10 +0000 (0:00:00.765) 0:02:17.670 ***** 2025-11-08 14:13:00.495434 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:13:00.495441 | orchestrator | 2025-11-08 14:13:00.495447 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-11-08 14:13:00.495460 | orchestrator | Saturday 08 November 2025 14:10:14 +0000 (0:00:04.212) 0:02:21.882 ***** 2025-11-08 14:13:00.495466 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:13:00.495473 | orchestrator | 2025-11-08 14:13:00.495480 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2025-11-08 14:13:00.495491 | orchestrator | Saturday 08 November 2025 14:10:17 +0000 (0:00:03.297) 0:02:25.179 ***** 2025-11-08 14:13:00.495498 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-11-08 14:13:00.495505 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-11-08 14:13:00.495512 | orchestrator | 2025-11-08 14:13:00.495518 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2025-11-08 14:13:00.495525 | orchestrator | Saturday 08 November 2025 14:10:24 +0000 (0:00:06.993) 0:02:32.173 ***** 2025-11-08 14:13:00.495532 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:13:00.495538 | orchestrator | 2025-11-08 14:13:00.495545 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2025-11-08 14:13:00.495552 | orchestrator | Saturday 08 November 2025 14:10:28 +0000 (0:00:03.690) 0:02:35.863 ***** 2025-11-08 14:13:00.495558 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:13:00.495565 | orchestrator | ok: [testbed-node-1] 2025-11-08 14:13:00.495572 | orchestrator | ok: [testbed-node-2] 2025-11-08 14:13:00.495579 | orchestrator | 2025-11-08 14:13:00.495585 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2025-11-08 14:13:00.495592 | orchestrator | Saturday 08 November 2025 14:10:28 +0000 (0:00:00.364) 0:02:36.228 ***** 2025-11-08 14:13:00.495602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-08 14:13:00.495637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-08 14:13:00.495646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-08 14:13:00.495660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-08 14:13:00.495673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-08 14:13:00.495680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-08 14:13:00.495688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-08 14:13:00.495697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-08 14:13:00.495724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-08 14:13:00.495733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-08 14:13:00.495745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-08 14:13:00.495756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-08 14:13:00.495764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:13:00.495771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:13:00.495778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:13:00.495785 | orchestrator | 2025-11-08 14:13:00.495792 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2025-11-08 14:13:00.495799 | orchestrator | Saturday 08 November 2025 14:10:31 +0000 (0:00:02.655) 0:02:38.883 ***** 2025-11-08 14:13:00.495805 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:13:00.495812 | orchestrator | 2025-11-08 14:13:00.495838 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2025-11-08 14:13:00.495846 | orchestrator | Saturday 08 November 2025 14:10:31 +0000 (0:00:00.162) 0:02:39.046 ***** 2025-11-08 
14:13:00.495858 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:13:00.495864 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:13:00.495871 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:13:00.495913 | orchestrator | 2025-11-08 14:13:00.495927 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2025-11-08 14:13:00.495939 | orchestrator | Saturday 08 November 2025 14:10:32 +0000 (0:00:00.590) 0:02:39.637 ***** 2025-11-08 14:13:00.495953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-11-08 14:13:00.495969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-08 14:13:00.495977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-08 14:13:00.495984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-08 14:13:00.495991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-08 14:13:00.495998 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:13:00.496033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-11-08 14:13:00.496047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-08 14:13:00.496062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-08 14:13:00.496070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-08 14:13:00.496077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-08 14:13:00.496084 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:13:00.496091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-11-08 14:13:00.496126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-08 14:13:00.496134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-08 14:13:00.496141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-08 14:13:00.496152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-08 14:13:00.496159 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:13:00.496165 | orchestrator | 2025-11-08 14:13:00.496172 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-11-08 14:13:00.496179 | orchestrator | Saturday 08 November 2025 14:10:33 +0000 (0:00:00.785) 0:02:40.422 ***** 2025-11-08 14:13:00.496186 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 14:13:00.496193 | orchestrator | 2025-11-08 14:13:00.496199 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2025-11-08 14:13:00.496206 | orchestrator | Saturday 08 November 2025 14:10:33 +0000 (0:00:00.609) 0:02:41.031 ***** 2025-11-08 14:13:00.496213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-08 14:13:00.496246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-08 14:13:00 | INFO  | Task be7ee159-6b59-4112-9f22-94b837336d63 is in state SUCCESS 2025-11-08 14:13:00.496265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-08 14:13:00.496276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-08 14:13:00.496284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-08 14:13:00.496290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-08 14:13:00.496298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-08 14:13:00.496334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-08 14:13:00.496347 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-08 14:13:00.496358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-08 14:13:00.496374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-08 14:13:00.496386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-08 14:13:00.496395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:13:00.496413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:13:00.496432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:13:00.496444 | orchestrator | 2025-11-08 14:13:00.496455 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-11-08 14:13:00.496465 | orchestrator | Saturday 08 November 2025 14:10:38 +0000 (0:00:05.323) 0:02:46.355 ***** 2025-11-08 14:13:00.496475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-11-08 14:13:00.496491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-08 14:13:00.496503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-08 14:13:00.496514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 
'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-08 14:13:00.496531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-08 14:13:00.496541 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:13:00.496563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-11-08 14:13:00.496574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-08 14:13:00.496586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-08 14:13:00.496603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-08 14:13:00.496614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-08 14:13:00.496632 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:13:00.496642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-11-08 14:13:00.496659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-08 14:13:00.496671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-08 14:13:00.496681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-08 14:13:00.496698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-08 14:13:00.496708 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:13:00.496719 | orchestrator | 2025-11-08 14:13:00.496730 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-11-08 14:13:00.496741 | orchestrator | Saturday 08 November 2025 14:10:40 +0000 (0:00:01.304) 0:02:47.660 ***** 2025-11-08 14:13:00.496759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-11-08 14:13:00.496771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-08 14:13:00.496788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-08 14:13:00.496799 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-08 14:13:00.496811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-08 14:13:00.496822 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:13:00.496838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-11-08 14:13:00.496860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-08 14:13:00.496871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-08 14:13:00.496919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 
'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-08 14:13:00.496940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-08 14:13:00.496952 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:13:00.496964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-11-08 14:13:00.496980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-08 14:13:00.497001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-08 14:13:00.497012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-08 14:13:00.497024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-08 14:13:00.497037 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:13:00.497049 | orchestrator | 2025-11-08 14:13:00.497060 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-11-08 14:13:00.497071 | orchestrator | Saturday 08 November 2025 14:10:41 +0000 (0:00:01.087) 0:02:48.747 ***** 2025-11-08 14:13:00.497089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-08 14:13:00.497106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-08 14:13:00.497127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-08 14:13:00.497139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-08 14:13:00.497151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-08 14:13:00.497170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-08 14:13:00.497183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-08 14:13:00.497194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-08 14:13:00.497221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-08 14:13:00.497233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-08 14:13:00.497245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-08 14:13:00.497257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-08 14:13:00.497276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:13:00.497290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:13:00.497302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:13:00.497322 | orchestrator | 2025-11-08 14:13:00.497333 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-11-08 14:13:00.497352 | orchestrator | Saturday 08 November 2025 14:10:46 +0000 (0:00:05.350) 0:02:54.097 ***** 2025-11-08 14:13:00.497363 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-11-08 14:13:00.497375 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-11-08 14:13:00.497382 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-11-08 14:13:00.497389 | orchestrator | 2025-11-08 14:13:00.497395 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2025-11-08 14:13:00.497402 | orchestrator | Saturday 08 November 2025 14:10:48 +0000 (0:00:02.071) 0:02:56.169 ***** 2025-11-08 14:13:00.497409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-08 14:13:00.497417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-08 14:13:00.497432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-08 14:13:00.497445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-08 14:13:00.497459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-08 14:13:00.497470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-08 14:13:00.497481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-08 14:13:00.497493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-08 14:13:00.497511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-08 14:13:00.497523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-08 14:13:00.497542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-08 14:13:00.497556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-08 14:13:00.497563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:13:00.497570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:13:00.497577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:13:00.497584 | orchestrator | 2025-11-08 14:13:00.497591 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2025-11-08 14:13:00.497597 | orchestrator | Saturday 08 November 2025 14:11:09 +0000 (0:00:21.096) 0:03:17.266 ***** 2025-11-08 14:13:00.497604 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:13:00.497611 | orchestrator | changed: [testbed-node-1] 2025-11-08 14:13:00.497617 | orchestrator | changed: [testbed-node-2] 2025-11-08 14:13:00.497624 | orchestrator | 2025-11-08 14:13:00.497631 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-11-08 14:13:00.497637 | orchestrator | Saturday 08 November 2025 14:11:11 +0000 (0:00:01.959) 0:03:19.226 ***** 2025-11-08 14:13:00.497649 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-11-08 14:13:00.497656 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-11-08 14:13:00.497667 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-11-08 14:13:00.497674 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-11-08 14:13:00.497681 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-11-08 14:13:00.497687 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-11-08 14:13:00.497694 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-11-08 14:13:00.497701 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-11-08 14:13:00.497707 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-11-08 14:13:00.497714 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-11-08 14:13:00.497720 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-11-08 14:13:00.497727 | orchestrator | 
changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-11-08 14:13:00.497733 | orchestrator | 2025-11-08 14:13:00.497740 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2025-11-08 14:13:00.497746 | orchestrator | Saturday 08 November 2025 14:11:17 +0000 (0:00:05.755) 0:03:24.982 ***** 2025-11-08 14:13:00.497753 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-11-08 14:13:00.497760 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-11-08 14:13:00.497766 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-11-08 14:13:00.497773 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-11-08 14:13:00.497779 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-11-08 14:13:00.497786 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-11-08 14:13:00.497793 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-11-08 14:13:00.497799 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-11-08 14:13:00.497806 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-11-08 14:13:00.497816 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-11-08 14:13:00.497823 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-11-08 14:13:00.497829 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-11-08 14:13:00.497836 | orchestrator | 2025-11-08 14:13:00.497842 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-11-08 14:13:00.497849 | orchestrator | Saturday 08 November 2025 14:11:23 +0000 (0:00:05.893) 0:03:30.875 ***** 2025-11-08 14:13:00.497855 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-11-08 14:13:00.497862 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-11-08 14:13:00.497868 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-11-08 14:13:00.497875 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-11-08 14:13:00.497898 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-11-08 14:13:00.497905 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-11-08 14:13:00.497911 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-11-08 14:13:00.497918 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-11-08 14:13:00.497925 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-11-08 14:13:00.497931 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-11-08 14:13:00.497938 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-11-08 14:13:00.497944 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-11-08 14:13:00.497951 | orchestrator | 2025-11-08 14:13:00.497958 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-11-08 14:13:00.497964 | orchestrator | Saturday 08 November 2025 14:11:29 +0000 (0:00:05.915) 0:03:36.790 ***** 2025-11-08 14:13:00.497971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 
'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-08 14:13:00.497991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-08 14:13:00.497999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-08 14:13:00.498010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-08 14:13:00.498060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-08 14:13:00.498067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-08 14:13:00.498080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-08 14:13:00.498094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-08 14:13:00.498101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-08 14:13:00.498108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-08 14:13:00.498119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-08 14:13:00.498126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-08 14:13:00.498138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:13:00.498145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:13:00.498156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-08 14:13:00.498163 | orchestrator | 2025-11-08 14:13:00.498170 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-11-08 14:13:00.498177 | orchestrator | Saturday 08 November 2025 14:11:34 +0000 (0:00:05.354) 0:03:42.144 ***** 2025-11-08 14:13:00.498184 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:13:00.498191 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:13:00.498199 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:13:00.498210 | orchestrator | 2025-11-08 14:13:00.498222 | orchestrator | TASK [octavia : Creating Octavia database] 
************************************* 2025-11-08 14:13:00.498232 | orchestrator | Saturday 08 November 2025 14:11:35 +0000 (0:00:00.340) 0:03:42.485 ***** 2025-11-08 14:13:00.498242 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:13:00.498254 | orchestrator | 2025-11-08 14:13:00.498265 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2025-11-08 14:13:00.498277 | orchestrator | Saturday 08 November 2025 14:11:37 +0000 (0:00:02.189) 0:03:44.675 ***** 2025-11-08 14:13:00.498286 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:13:00.498292 | orchestrator | 2025-11-08 14:13:00.498299 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2025-11-08 14:13:00.498306 | orchestrator | Saturday 08 November 2025 14:11:39 +0000 (0:00:02.002) 0:03:46.677 ***** 2025-11-08 14:13:00.498313 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:13:00.498319 | orchestrator | 2025-11-08 14:13:00.498326 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2025-11-08 14:13:00.498333 | orchestrator | Saturday 08 November 2025 14:11:41 +0000 (0:00:02.258) 0:03:48.936 ***** 2025-11-08 14:13:00.498340 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:13:00.498347 | orchestrator | 2025-11-08 14:13:00.498354 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2025-11-08 14:13:00.498360 | orchestrator | Saturday 08 November 2025 14:11:44 +0000 (0:00:02.841) 0:03:51.778 ***** 2025-11-08 14:13:00.498371 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:13:00.498378 | orchestrator | 2025-11-08 14:13:00.498384 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-11-08 14:13:00.498397 | orchestrator | Saturday 08 November 2025 14:12:06 +0000 (0:00:22.411) 0:04:14.189 ***** 2025-11-08 14:13:00.498403 | orchestrator | 2025-11-08 14:13:00.498410 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-11-08 14:13:00.498417 | orchestrator | Saturday 08 November 2025 14:12:06 +0000 (0:00:00.068) 0:04:14.257 ***** 2025-11-08 14:13:00.498429 | orchestrator | 2025-11-08 14:13:00.498438 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-11-08 14:13:00.498449 | orchestrator | Saturday 08 November 2025 14:12:06 +0000 (0:00:00.088) 0:04:14.346 ***** 2025-11-08 14:13:00.498460 | orchestrator | 2025-11-08 14:13:00.498471 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2025-11-08 14:13:00.498482 | orchestrator | Saturday 08 November 2025 14:12:07 +0000 (0:00:00.084) 0:04:14.430 ***** 2025-11-08 14:13:00.498494 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:13:00.498505 | orchestrator | changed: [testbed-node-1] 2025-11-08 14:13:00.498516 | orchestrator | changed: [testbed-node-2] 2025-11-08 14:13:00.498527 | orchestrator | 2025-11-08 14:13:00.498535 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2025-11-08 14:13:00.498541 | orchestrator | Saturday 08 November 2025 14:12:20 +0000 (0:00:13.160) 0:04:27.591 ***** 2025-11-08 14:13:00.498548 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:13:00.498554 | orchestrator | changed: [testbed-node-1] 2025-11-08 14:13:00.498561 | orchestrator | changed: [testbed-node-2] 2025-11-08 14:13:00.498567 | orchestrator | 
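For reference, every item dict printed by the "Copying over octavia.conf" and "Check octavia containers" tasks above follows the same kolla-ansible container-definition shape. A minimal sketch of one of them, with values copied from the log output (the surrounding Python literal is only an illustration of the structure, not the role's actual variables file; the two empty strings in the original volumes list are dropped here):

# Shape of the octavia-worker definition as dumped by the tasks above.
octavia_worker = {
    "container_name": "octavia_worker",
    "group": "octavia-worker",
    "enabled": True,
    "image": "registry.osism.tech/kolla/octavia-worker:2024.2",
    "volumes": [
        "/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro",
        "/etc/localtime:/etc/localtime:ro",
        "/etc/timezone:/etc/timezone:ro",
        "kolla_logs:/var/log/kolla/",
    ],
    "dimensions": {},
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        # 5672 is the AMQP (RabbitMQ) port the worker connects to
        "test": ["CMD-SHELL", "healthcheck_port octavia-worker 5672"],
        "timeout": "30",
    },
}

The API variant differs only in its extra octavia_driver_agent volume, an HTTP healthcheck (healthcheck_curl against port 9876 on the node's internal address), and the attached haproxy frontend definitions visible in the log.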
2025-11-08 14:13:00.498574 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2025-11-08 14:13:00.498581 | orchestrator | Saturday 08 November 2025 14:12:27 +0000 (0:00:07.364) 0:04:34.955 ***** 2025-11-08 14:13:00.498588 | orchestrator | changed: [testbed-node-1] 2025-11-08 14:13:00.498594 | orchestrator | changed: [testbed-node-2] 2025-11-08 14:13:00.498604 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:13:00.498614 | orchestrator | 2025-11-08 14:13:00.498626 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2025-11-08 14:13:00.498637 | orchestrator | Saturday 08 November 2025 14:12:36 +0000 (0:00:08.880) 0:04:43.836 ***** 2025-11-08 14:13:00.498648 | orchestrator | changed: [testbed-node-1] 2025-11-08 14:13:00.498658 | orchestrator | changed: [testbed-node-2] 2025-11-08 14:13:00.498665 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:13:00.498672 | orchestrator | 2025-11-08 14:13:00.498679 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2025-11-08 14:13:00.498685 | orchestrator | Saturday 08 November 2025 14:12:45 +0000 (0:00:09.040) 0:04:52.877 ***** 2025-11-08 14:13:00.498692 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:13:00.498698 | orchestrator | changed: [testbed-node-1] 2025-11-08 14:13:00.498705 | orchestrator | changed: [testbed-node-2] 2025-11-08 14:13:00.498711 | orchestrator | 2025-11-08 14:13:00.498718 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 14:13:00.498725 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-11-08 14:13:00.498733 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-11-08 14:13:00.498740 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-11-08 14:13:00.498747 | orchestrator | 2025-11-08 14:13:00.498754 | orchestrator | 2025-11-08 14:13:00.498760 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 14:13:00.498774 | orchestrator | Saturday 08 November 2025 14:12:56 +0000 (0:00:11.492) 0:05:04.369 ***** 2025-11-08 14:13:00.498781 | orchestrator | =============================================================================== 2025-11-08 14:13:00.498788 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 22.41s 2025-11-08 14:13:00.498794 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 21.10s 2025-11-08 14:13:00.498806 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.30s 2025-11-08 14:13:00.498813 | orchestrator | octavia : Add rules for security groups -------------------------------- 15.78s 2025-11-08 14:13:00.498819 | orchestrator | octavia : Restart octavia-api container -------------------------------- 13.16s 2025-11-08 14:13:00.498826 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 11.49s 2025-11-08 14:13:00.498833 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.89s 2025-11-08 14:13:00.498839 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 9.04s 2025-11-08 14:13:00.498846 | orchestrator | octavia : Restart octavia-health-manager 
container ---------------------- 8.88s 2025-11-08 14:13:00.498853 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.60s 2025-11-08 14:13:00.498859 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.77s 2025-11-08 14:13:00.498866 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 7.36s 2025-11-08 14:13:00.498872 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 7.06s 2025-11-08 14:13:00.498901 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.99s 2025-11-08 14:13:00.498909 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 6.19s 2025-11-08 14:13:00.498916 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.92s 2025-11-08 14:13:00.498922 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.92s 2025-11-08 14:13:00.498934 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.89s 2025-11-08 14:13:00.498941 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.87s 2025-11-08 14:13:00.498947 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.76s 2025-11-08 14:13:00.498954 | orchestrator | 2025-11-08 14:13:00 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-08 14:13:03.536750 | orchestrator | 2025-11-08 14:13:03 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-08 14:13:06.574453 | orchestrator | 2025-11-08 14:13:06 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-08 14:13:09.618446 | orchestrator | 2025-11-08 14:13:09 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-08 14:13:12.665106 | orchestrator | 2025-11-08 14:13:12 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-08 14:13:15.709955 | orchestrator | 2025-11-08 14:13:15 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-08 14:13:18.756359 | orchestrator | 2025-11-08 14:13:18 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-08 14:13:21.798533 | orchestrator | 2025-11-08 14:13:21 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-08 14:13:24.847606 | orchestrator | 2025-11-08 14:13:24 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-08 14:13:27.884471 | orchestrator | 2025-11-08 14:13:27 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-08 14:13:30.919718 | orchestrator | 2025-11-08 14:13:30 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-08 14:13:33.959718 | orchestrator | 2025-11-08 14:13:33 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-08 14:13:37.001209 | orchestrator | 2025-11-08 14:13:37 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-08 14:13:40.038509 | orchestrator | 2025-11-08 14:13:40 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-08 14:13:43.079815 | orchestrator | 2025-11-08 14:13:43 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-08 14:13:46.114789 | orchestrator | 2025-11-08 14:13:46 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-08 14:13:49.159594 | orchestrator | 2025-11-08 14:13:49 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-08 14:13:52.201823 | orchestrator | 2025-11-08 14:13:52 | INFO  | 
Wait 1 second(s) until refresh of running tasks 2025-11-08 14:13:55.238650 | orchestrator | 2025-11-08 14:13:55 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-08 14:13:58.278286 | orchestrator | 2025-11-08 14:13:58 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-08 14:14:01.321442 | orchestrator | 2025-11-08 14:14:01.713568 | orchestrator | 2025-11-08 14:14:01.717684 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Sat Nov 8 14:14:01 UTC 2025 2025-11-08 14:14:01.717745 | orchestrator | 2025-11-08 14:14:02.062689 | orchestrator | ok: Runtime: 0:36:14.806734 2025-11-08 14:14:02.317756 | 2025-11-08 14:14:02.317900 | TASK [Bootstrap services] 2025-11-08 14:14:03.124034 | orchestrator | 2025-11-08 14:14:03.124244 | orchestrator | # BOOTSTRAP 2025-11-08 14:14:03.124269 | orchestrator | 2025-11-08 14:14:03.124284 | orchestrator | + set -e 2025-11-08 14:14:03.124298 | orchestrator | + echo 2025-11-08 14:14:03.124312 | orchestrator | + echo '# BOOTSTRAP' 2025-11-08 14:14:03.124331 | orchestrator | + echo 2025-11-08 14:14:03.124377 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-11-08 14:14:03.133110 | orchestrator | + set -e 2025-11-08 14:14:03.133232 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-11-08 14:14:05.733170 | orchestrator | 2025-11-08 14:14:05 | INFO  | It takes a moment until task 55c49e47-d55d-40a9-a3bf-f9aabd27aacf (flavor-manager) has been started and output is visible here. 2025-11-08 14:14:13.832621 | orchestrator | 2025-11-08 14:14:08 | INFO  | Flavor SCS-1L-1 created 2025-11-08 14:14:13.832776 | orchestrator | 2025-11-08 14:14:09 | INFO  | Flavor SCS-1L-1-5 created 2025-11-08 14:14:13.832800 | orchestrator | 2025-11-08 14:14:09 | INFO  | Flavor SCS-1V-2 created 2025-11-08 14:14:13.832813 | orchestrator | 2025-11-08 14:14:09 | INFO  | Flavor SCS-1V-2-5 created 2025-11-08 14:14:13.832825 | orchestrator | 2025-11-08 14:14:09 | INFO  | Flavor SCS-1V-4 created 2025-11-08 14:14:13.832837 | orchestrator | 2025-11-08 14:14:09 | INFO  | Flavor SCS-1V-4-10 created 2025-11-08 14:14:13.832848 | orchestrator | 2025-11-08 14:14:10 | INFO  | Flavor SCS-1V-8 created 2025-11-08 14:14:13.832860 | orchestrator | 2025-11-08 14:14:10 | INFO  | Flavor SCS-1V-8-20 created 2025-11-08 14:14:13.832891 | orchestrator | 2025-11-08 14:14:10 | INFO  | Flavor SCS-2V-4 created 2025-11-08 14:14:13.832903 | orchestrator | 2025-11-08 14:14:10 | INFO  | Flavor SCS-2V-4-10 created 2025-11-08 14:14:13.832944 | orchestrator | 2025-11-08 14:14:10 | INFO  | Flavor SCS-2V-8 created 2025-11-08 14:14:13.832957 | orchestrator | 2025-11-08 14:14:10 | INFO  | Flavor SCS-2V-8-20 created 2025-11-08 14:14:13.832973 | orchestrator | 2025-11-08 14:14:11 | INFO  | Flavor SCS-2V-16 created 2025-11-08 14:14:13.832991 | orchestrator | 2025-11-08 14:14:11 | INFO  | Flavor SCS-2V-16-50 created 2025-11-08 14:14:13.833009 | orchestrator | 2025-11-08 14:14:11 | INFO  | Flavor SCS-4V-8 created 2025-11-08 14:14:13.833028 | orchestrator | 2025-11-08 14:14:11 | INFO  | Flavor SCS-4V-8-20 created 2025-11-08 14:14:13.833046 | orchestrator | 2025-11-08 14:14:11 | INFO  | Flavor SCS-4V-16 created 2025-11-08 14:14:13.833064 | orchestrator | 2025-11-08 14:14:11 | INFO  | Flavor SCS-4V-16-50 created 2025-11-08 14:14:13.833083 | orchestrator | 2025-11-08 14:14:12 | INFO  | Flavor SCS-4V-32 created 2025-11-08 14:14:13.833101 | orchestrator | 2025-11-08 14:14:12 | INFO  | Flavor SCS-4V-32-100 created 2025-11-08 14:14:13.833119 | orchestrator | 
2025-11-08 14:14:12 | INFO  | Flavor SCS-8V-16 created 2025-11-08 14:14:13.833137 | orchestrator | 2025-11-08 14:14:12 | INFO  | Flavor SCS-8V-16-50 created 2025-11-08 14:14:13.833157 | orchestrator | 2025-11-08 14:14:12 | INFO  | Flavor SCS-8V-32 created 2025-11-08 14:14:13.833175 | orchestrator | 2025-11-08 14:14:12 | INFO  | Flavor SCS-8V-32-100 created 2025-11-08 14:14:13.833193 | orchestrator | 2025-11-08 14:14:12 | INFO  | Flavor SCS-16V-32 created 2025-11-08 14:14:13.833213 | orchestrator | 2025-11-08 14:14:13 | INFO  | Flavor SCS-16V-32-100 created 2025-11-08 14:14:13.833231 | orchestrator | 2025-11-08 14:14:13 | INFO  | Flavor SCS-2V-4-20s created 2025-11-08 14:14:13.833250 | orchestrator | 2025-11-08 14:14:13 | INFO  | Flavor SCS-4V-8-50s created 2025-11-08 14:14:13.833270 | orchestrator | 2025-11-08 14:14:13 | INFO  | Flavor SCS-8V-32-100s created 2025-11-08 14:14:16.559680 | orchestrator | 2025-11-08 14:14:16 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-11-08 14:14:26.695310 | orchestrator | 2025-11-08 14:14:26 | INFO  | Task c7960b1c-2988-4442-9428-9a0311b3a5d5 (bootstrap-basic) was prepared for execution. 2025-11-08 14:14:26.695479 | orchestrator | 2025-11-08 14:14:26 | INFO  | It takes a moment until task c7960b1c-2988-4442-9428-9a0311b3a5d5 (bootstrap-basic) has been started and output is visible here. 2025-11-08 14:15:33.704276 | orchestrator | 2025-11-08 14:15:33.704426 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-11-08 14:15:33.704440 | orchestrator | 2025-11-08 14:15:33.704465 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-11-08 14:15:33.705311 | orchestrator | Saturday 08 November 2025 14:14:31 +0000 (0:00:00.074) 0:00:00.074 ***** 2025-11-08 14:15:33.705328 | orchestrator | ok: [localhost] 2025-11-08 14:15:33.705340 | orchestrator | 2025-11-08 14:15:33.705351 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2025-11-08 14:15:33.705360 | orchestrator | Saturday 08 November 2025 14:14:33 +0000 (0:00:02.086) 0:00:02.161 ***** 2025-11-08 14:15:33.705369 | orchestrator | ok: [localhost] 2025-11-08 14:15:33.705377 | orchestrator | 2025-11-08 14:15:33.705387 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2025-11-08 14:15:33.705396 | orchestrator | Saturday 08 November 2025 14:14:44 +0000 (0:00:11.323) 0:00:13.484 ***** 2025-11-08 14:15:33.705405 | orchestrator | changed: [localhost] 2025-11-08 14:15:33.705414 | orchestrator | 2025-11-08 14:15:33.705423 | orchestrator | TASK [Get volume type local] *************************************************** 2025-11-08 14:15:33.705432 | orchestrator | Saturday 08 November 2025 14:14:51 +0000 (0:00:06.655) 0:00:20.139 ***** 2025-11-08 14:15:33.705441 | orchestrator | ok: [localhost] 2025-11-08 14:15:33.705450 | orchestrator | 2025-11-08 14:15:33.705458 | orchestrator | TASK [Create volume type local] ************************************************ 2025-11-08 14:15:33.705467 | orchestrator | Saturday 08 November 2025 14:14:58 +0000 (0:00:07.568) 0:00:27.708 ***** 2025-11-08 14:15:33.705482 | orchestrator | changed: [localhost] 2025-11-08 14:15:33.705491 | orchestrator | 2025-11-08 14:15:33.705499 | orchestrator | TASK [Create public network] *************************************************** 2025-11-08 14:15:33.705508 | orchestrator | Saturday 08 November 2025 14:15:07 +0000 (0:00:08.608) 
0:00:36.317 ***** 2025-11-08 14:15:33.705517 | orchestrator | changed: [localhost] 2025-11-08 14:15:33.705525 | orchestrator | 2025-11-08 14:15:33.705533 | orchestrator | TASK [Set public network to default] ******************************************* 2025-11-08 14:15:33.705542 | orchestrator | Saturday 08 November 2025 14:15:13 +0000 (0:00:05.628) 0:00:41.945 ***** 2025-11-08 14:15:33.705550 | orchestrator | changed: [localhost] 2025-11-08 14:15:33.705559 | orchestrator | 2025-11-08 14:15:33.705568 | orchestrator | TASK [Create public subnet] **************************************************** 2025-11-08 14:15:33.705588 | orchestrator | Saturday 08 November 2025 14:15:20 +0000 (0:00:07.485) 0:00:49.431 ***** 2025-11-08 14:15:33.705597 | orchestrator | changed: [localhost] 2025-11-08 14:15:33.705606 | orchestrator | 2025-11-08 14:15:33.705614 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2025-11-08 14:15:33.705623 | orchestrator | Saturday 08 November 2025 14:15:25 +0000 (0:00:04.709) 0:00:54.141 ***** 2025-11-08 14:15:33.705632 | orchestrator | changed: [localhost] 2025-11-08 14:15:33.705640 | orchestrator | 2025-11-08 14:15:33.705649 | orchestrator | TASK [Create manager role] ***************************************************** 2025-11-08 14:15:33.705657 | orchestrator | Saturday 08 November 2025 14:15:29 +0000 (0:00:04.134) 0:00:58.275 ***** 2025-11-08 14:15:33.705666 | orchestrator | ok: [localhost] 2025-11-08 14:15:33.705674 | orchestrator | 2025-11-08 14:15:33.705683 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 14:15:33.705692 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 14:15:33.705702 | orchestrator | 2025-11-08 14:15:33.705711 | orchestrator | 2025-11-08 14:15:33.705720 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 14:15:33.705752 | orchestrator | Saturday 08 November 2025 14:15:33 +0000 (0:00:03.810) 0:01:02.086 ***** 2025-11-08 14:15:33.705761 | orchestrator | =============================================================================== 2025-11-08 14:15:33.705769 | orchestrator | Get volume type LUKS --------------------------------------------------- 11.32s 2025-11-08 14:15:33.705778 | orchestrator | Create volume type local ------------------------------------------------ 8.61s 2025-11-08 14:15:33.705787 | orchestrator | Get volume type local --------------------------------------------------- 7.57s 2025-11-08 14:15:33.705795 | orchestrator | Set public network to default ------------------------------------------- 7.49s 2025-11-08 14:15:33.705804 | orchestrator | Create volume type LUKS ------------------------------------------------- 6.66s 2025-11-08 14:15:33.705813 | orchestrator | Create public network --------------------------------------------------- 5.63s 2025-11-08 14:15:33.705821 | orchestrator | Create public subnet ---------------------------------------------------- 4.71s 2025-11-08 14:15:33.705830 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.13s 2025-11-08 14:15:33.705838 | orchestrator | Create manager role ----------------------------------------------------- 3.81s 2025-11-08 14:15:33.705847 | orchestrator | Gathering Facts --------------------------------------------------------- 2.09s 2025-11-08 14:15:36.466824 | orchestrator | 2025-11-08 14:15:36 | INFO  | It takes 
a moment until task f496aac7-21f9-4505-a250-bb2c901a8ee0 (image-manager) has been started and output is visible here. 2025-11-08 14:16:18.453733 | orchestrator | 2025-11-08 14:15:39 | INFO  | Processing image 'Cirros 0.6.2' 2025-11-08 14:16:18.453897 | orchestrator | 2025-11-08 14:15:39 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2025-11-08 14:16:18.453927 | orchestrator | 2025-11-08 14:15:39 | INFO  | Importing image Cirros 0.6.2 2025-11-08 14:16:18.453996 | orchestrator | 2025-11-08 14:15:39 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-11-08 14:16:18.454730 | orchestrator | 2025-11-08 14:15:41 | INFO  | Waiting for image to leave queued state... 2025-11-08 14:16:18.454757 | orchestrator | 2025-11-08 14:15:43 | INFO  | Waiting for import to complete... 2025-11-08 14:16:18.454775 | orchestrator | 2025-11-08 14:15:54 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2025-11-08 14:16:18.454793 | orchestrator | 2025-11-08 14:15:54 | INFO  | Checking parameters of 'Cirros 0.6.2' 2025-11-08 14:16:18.454810 | orchestrator | 2025-11-08 14:15:54 | INFO  | Setting internal_version = 0.6.2 2025-11-08 14:16:18.454828 | orchestrator | 2025-11-08 14:15:54 | INFO  | Setting image_original_user = cirros 2025-11-08 14:16:18.454845 | orchestrator | 2025-11-08 14:15:54 | INFO  | Adding tag os:cirros 2025-11-08 14:16:18.454863 | orchestrator | 2025-11-08 14:15:54 | INFO  | Setting property architecture: x86_64 2025-11-08 14:16:18.454881 | orchestrator | 2025-11-08 14:15:54 | INFO  | Setting property hw_disk_bus: scsi 2025-11-08 14:16:18.454898 | orchestrator | 2025-11-08 14:15:55 | INFO  | Setting property hw_rng_model: virtio 2025-11-08 14:16:18.454915 | orchestrator | 2025-11-08 14:15:55 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-11-08 14:16:18.454932 | orchestrator | 2025-11-08 14:15:55 | INFO  | Setting property hw_watchdog_action: reset 2025-11-08 14:16:18.454985 | orchestrator | 2025-11-08 14:15:55 | INFO  | Setting property hypervisor_type: qemu 2025-11-08 14:16:18.455003 | orchestrator | 2025-11-08 14:15:55 | INFO  | Setting property os_distro: cirros 2025-11-08 14:16:18.455021 | orchestrator | 2025-11-08 14:15:56 | INFO  | Setting property os_purpose: minimal 2025-11-08 14:16:18.455053 | orchestrator | 2025-11-08 14:15:56 | INFO  | Setting property replace_frequency: never 2025-11-08 14:16:18.455102 | orchestrator | 2025-11-08 14:15:56 | INFO  | Setting property uuid_validity: none 2025-11-08 14:16:18.455118 | orchestrator | 2025-11-08 14:15:56 | INFO  | Setting property provided_until: none 2025-11-08 14:16:18.455145 | orchestrator | 2025-11-08 14:15:57 | INFO  | Setting property image_description: Cirros 2025-11-08 14:16:18.455169 | orchestrator | 2025-11-08 14:15:57 | INFO  | Setting property image_name: Cirros 2025-11-08 14:16:18.455187 | orchestrator | 2025-11-08 14:15:57 | INFO  | Setting property internal_version: 0.6.2 2025-11-08 14:16:18.455204 | orchestrator | 2025-11-08 14:15:57 | INFO  | Setting property image_original_user: cirros 2025-11-08 14:16:18.455220 | orchestrator | 2025-11-08 14:15:57 | INFO  | Setting property os_version: 0.6.2 2025-11-08 14:16:18.455236 | orchestrator | 2025-11-08 14:15:58 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-11-08 14:16:18.455255 | orchestrator | 2025-11-08 14:15:58 | 
INFO  | Setting property image_build_date: 2023-05-30 2025-11-08 14:16:18.455271 | orchestrator | 2025-11-08 14:15:58 | INFO  | Checking status of 'Cirros 0.6.2' 2025-11-08 14:16:18.455300 | orchestrator | 2025-11-08 14:15:58 | INFO  | Checking visibility of 'Cirros 0.6.2' 2025-11-08 14:16:18.455315 | orchestrator | 2025-11-08 14:15:58 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2025-11-08 14:16:18.455331 | orchestrator | 2025-11-08 14:15:58 | INFO  | Processing image 'Cirros 0.6.3' 2025-11-08 14:16:18.455348 | orchestrator | 2025-11-08 14:15:58 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2025-11-08 14:16:18.455364 | orchestrator | 2025-11-08 14:15:58 | INFO  | Importing image Cirros 0.6.3 2025-11-08 14:16:18.455380 | orchestrator | 2025-11-08 14:15:58 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-11-08 14:16:18.455396 | orchestrator | 2025-11-08 14:16:00 | INFO  | Waiting for image to leave queued state... 2025-11-08 14:16:18.455413 | orchestrator | 2025-11-08 14:16:02 | INFO  | Waiting for import to complete... 2025-11-08 14:16:18.455452 | orchestrator | 2025-11-08 14:16:12 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2025-11-08 14:16:18.455469 | orchestrator | 2025-11-08 14:16:13 | INFO  | Checking parameters of 'Cirros 0.6.3' 2025-11-08 14:16:18.455485 | orchestrator | 2025-11-08 14:16:13 | INFO  | Setting internal_version = 0.6.3 2025-11-08 14:16:18.455498 | orchestrator | 2025-11-08 14:16:13 | INFO  | Setting image_original_user = cirros 2025-11-08 14:16:18.455508 | orchestrator | 2025-11-08 14:16:13 | INFO  | Adding tag os:cirros 2025-11-08 14:16:18.455518 | orchestrator | 2025-11-08 14:16:13 | INFO  | Setting property architecture: x86_64 2025-11-08 14:16:18.455527 | orchestrator | 2025-11-08 14:16:13 | INFO  | Setting property hw_disk_bus: scsi 2025-11-08 14:16:18.455537 | orchestrator | 2025-11-08 14:16:13 | INFO  | Setting property hw_rng_model: virtio 2025-11-08 14:16:18.455547 | orchestrator | 2025-11-08 14:16:14 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-11-08 14:16:18.455557 | orchestrator | 2025-11-08 14:16:14 | INFO  | Setting property hw_watchdog_action: reset 2025-11-08 14:16:18.455566 | orchestrator | 2025-11-08 14:16:14 | INFO  | Setting property hypervisor_type: qemu 2025-11-08 14:16:18.455576 | orchestrator | 2025-11-08 14:16:14 | INFO  | Setting property os_distro: cirros 2025-11-08 14:16:18.455595 | orchestrator | 2025-11-08 14:16:14 | INFO  | Setting property os_purpose: minimal 2025-11-08 14:16:18.455605 | orchestrator | 2025-11-08 14:16:15 | INFO  | Setting property replace_frequency: never 2025-11-08 14:16:18.455615 | orchestrator | 2025-11-08 14:16:15 | INFO  | Setting property uuid_validity: none 2025-11-08 14:16:18.455624 | orchestrator | 2025-11-08 14:16:15 | INFO  | Setting property provided_until: none 2025-11-08 14:16:18.455634 | orchestrator | 2025-11-08 14:16:15 | INFO  | Setting property image_description: Cirros 2025-11-08 14:16:18.455644 | orchestrator | 2025-11-08 14:16:16 | INFO  | Setting property image_name: Cirros 2025-11-08 14:16:18.455653 | orchestrator | 2025-11-08 14:16:16 | INFO  | Setting property internal_version: 0.6.3 2025-11-08 14:16:18.455663 | orchestrator | 2025-11-08 14:16:16 | INFO  | Setting property image_original_user: cirros 2025-11-08 14:16:18.455672 | orchestrator | 2025-11-08 14:16:16 | INFO  | Setting property 
os_version: 0.6.3 2025-11-08 14:16:18.455682 | orchestrator | 2025-11-08 14:16:16 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-11-08 14:16:18.455692 | orchestrator | 2025-11-08 14:16:17 | INFO  | Setting property image_build_date: 2024-09-26 2025-11-08 14:16:18.455707 | orchestrator | 2025-11-08 14:16:17 | INFO  | Checking status of 'Cirros 0.6.3' 2025-11-08 14:16:18.455717 | orchestrator | 2025-11-08 14:16:17 | INFO  | Checking visibility of 'Cirros 0.6.3' 2025-11-08 14:16:18.455727 | orchestrator | 2025-11-08 14:16:17 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2025-11-08 14:16:19.072126 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2025-11-08 14:16:21.881346 | orchestrator | 2025-11-08 14:16:21 | INFO  | date: 2025-11-08 2025-11-08 14:16:21.881446 | orchestrator | 2025-11-08 14:16:21 | INFO  | image: octavia-amphora-haproxy-2024.2.20251108.qcow2 2025-11-08 14:16:21.881462 | orchestrator | 2025-11-08 14:16:21 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20251108.qcow2 2025-11-08 14:16:21.881488 | orchestrator | 2025-11-08 14:16:21 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20251108.qcow2.CHECKSUM 2025-11-08 14:16:22.086265 | orchestrator | 2025-11-08 14:16:22 | INFO  | checksum: 3fc04b8bda05cee4d519be5cefbe648e9f95dc7de165b5defc4f24aaaa0625da 2025-11-08 14:16:22.175005 | orchestrator | 2025-11-08 14:16:22 | INFO  | It takes a moment until task 0dade861-0248-407e-90dd-2393103239c3 (image-manager) has been started and output is visible here. 2025-11-08 14:17:33.856342 | orchestrator | 2025-11-08 14:16:24 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-11-08' 2025-11-08 14:17:33.856485 | orchestrator | 2025-11-08 14:16:24 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20251108.qcow2: 200 2025-11-08 14:17:33.856514 | orchestrator | 2025-11-08 14:16:24 | INFO  | Importing image OpenStack Octavia Amphora 2025-11-08 2025-11-08 14:17:33.856537 | orchestrator | 2025-11-08 14:16:24 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20251108.qcow2 2025-11-08 14:17:33.856557 | orchestrator | 2025-11-08 14:16:25 | INFO  | Waiting for image to leave queued state... 2025-11-08 14:17:33.856576 | orchestrator | 2025-11-08 14:16:27 | INFO  | Waiting for import to complete... 2025-11-08 14:17:33.856591 | orchestrator | 2025-11-08 14:16:38 | INFO  | Waiting for import to complete... 2025-11-08 14:17:33.856629 | orchestrator | 2025-11-08 14:16:48 | INFO  | Waiting for import to complete... 2025-11-08 14:17:33.856640 | orchestrator | 2025-11-08 14:16:58 | INFO  | Waiting for import to complete... 2025-11-08 14:17:33.856651 | orchestrator | 2025-11-08 14:17:08 | INFO  | Waiting for import to complete... 2025-11-08 14:17:33.856662 | orchestrator | 2025-11-08 14:17:18 | INFO  | Waiting for import to complete... 
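The 301-openstack-octavia-amhpora-image.sh step above logs the image name, its download URL, the CHECKSUM URL, and the resolved sha256 before handing the import to the image-manager. A minimal sketch of that checksum lookup, assuming the CHECKSUM file uses the usual "<sha256>  <filename>" layout (the parsing below is an assumption for illustration, not taken from the script itself):

import urllib.request

image = "octavia-amphora-haproxy-2024.2.20251108.qcow2"
base = "https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image"

# fetch the CHECKSUM file published next to the image
with urllib.request.urlopen(f"{base}/{image}.CHECKSUM") as resp:
    checksum_text = resp.read().decode()

# pick the sha256 that belongs to this image file
checksum = next(
    line.split()[0]
    for line in checksum_text.splitlines()
    if image in line
)
print(checksum)  # the job above resolved 3fc04b8bda05cee4d519be5cefbe648e9f95dc7de165b5defc4f24aaaa0625da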
2025-11-08 14:17:33.856673 | orchestrator | 2025-11-08 14:17:28 | INFO  | Import of 'OpenStack Octavia Amphora 2025-11-08' successfully completed, reloading images 2025-11-08 14:17:33.856685 | orchestrator | 2025-11-08 14:17:29 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-11-08' 2025-11-08 14:17:33.856696 | orchestrator | 2025-11-08 14:17:29 | INFO  | Setting internal_version = 2025-11-08 2025-11-08 14:17:33.856706 | orchestrator | 2025-11-08 14:17:29 | INFO  | Setting image_original_user = ubuntu 2025-11-08 14:17:33.856718 | orchestrator | 2025-11-08 14:17:29 | INFO  | Adding tag amphora 2025-11-08 14:17:33.856729 | orchestrator | 2025-11-08 14:17:29 | INFO  | Adding tag os:ubuntu 2025-11-08 14:17:33.856739 | orchestrator | 2025-11-08 14:17:29 | INFO  | Setting property architecture: x86_64 2025-11-08 14:17:33.856750 | orchestrator | 2025-11-08 14:17:29 | INFO  | Setting property hw_disk_bus: scsi 2025-11-08 14:17:33.856760 | orchestrator | 2025-11-08 14:17:29 | INFO  | Setting property hw_rng_model: virtio 2025-11-08 14:17:33.856771 | orchestrator | 2025-11-08 14:17:30 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-11-08 14:17:33.856782 | orchestrator | 2025-11-08 14:17:30 | INFO  | Setting property hw_watchdog_action: reset 2025-11-08 14:17:33.856793 | orchestrator | 2025-11-08 14:17:30 | INFO  | Setting property hypervisor_type: qemu 2025-11-08 14:17:33.856819 | orchestrator | 2025-11-08 14:17:30 | INFO  | Setting property os_distro: ubuntu 2025-11-08 14:17:33.856831 | orchestrator | 2025-11-08 14:17:30 | INFO  | Setting property replace_frequency: quarterly 2025-11-08 14:17:33.856843 | orchestrator | 2025-11-08 14:17:31 | INFO  | Setting property uuid_validity: last-1 2025-11-08 14:17:33.856855 | orchestrator | 2025-11-08 14:17:31 | INFO  | Setting property provided_until: none 2025-11-08 14:17:33.856867 | orchestrator | 2025-11-08 14:17:31 | INFO  | Setting property os_purpose: network 2025-11-08 14:17:33.856879 | orchestrator | 2025-11-08 14:17:31 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2025-11-08 14:17:33.856892 | orchestrator | 2025-11-08 14:17:31 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2025-11-08 14:17:33.856904 | orchestrator | 2025-11-08 14:17:32 | INFO  | Setting property internal_version: 2025-11-08 2025-11-08 14:17:33.856915 | orchestrator | 2025-11-08 14:17:32 | INFO  | Setting property image_original_user: ubuntu 2025-11-08 14:17:33.856928 | orchestrator | 2025-11-08 14:17:32 | INFO  | Setting property os_version: 2025-11-08 2025-11-08 14:17:33.856940 | orchestrator | 2025-11-08 14:17:32 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20251108.qcow2 2025-11-08 14:17:33.856986 | orchestrator | 2025-11-08 14:17:33 | INFO  | Setting property image_build_date: 2025-11-08 2025-11-08 14:17:33.856998 | orchestrator | 2025-11-08 14:17:33 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-11-08' 2025-11-08 14:17:33.857010 | orchestrator | 2025-11-08 14:17:33 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-11-08' 2025-11-08 14:17:33.857041 | orchestrator | 2025-11-08 14:17:33 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2025-11-08 14:17:33.857062 | orchestrator | 2025-11-08 14:17:33 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2025-11-08 14:17:33.857076 | orchestrator | 2025-11-08 14:17:33 | INFO  | Processing image 
'Cirros 0.6.2' (removal candidate) 2025-11-08 14:17:33.857088 | orchestrator | 2025-11-08 14:17:33 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2025-11-08 14:17:34.522458 | orchestrator | ok: Runtime: 0:03:31.599113 2025-11-08 14:17:34.552056 | 2025-11-08 14:17:34.552220 | TASK [Run checks] 2025-11-08 14:17:35.328464 | orchestrator | + set -e 2025-11-08 14:17:35.328657 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-11-08 14:17:35.328677 | orchestrator | ++ export INTERACTIVE=false 2025-11-08 14:17:35.328689 | orchestrator | ++ INTERACTIVE=false 2025-11-08 14:17:35.328699 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-11-08 14:17:35.328707 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-11-08 14:17:35.328716 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-11-08 14:17:35.329193 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-11-08 14:17:35.336032 | orchestrator | 2025-11-08 14:17:35.336132 | orchestrator | # CHECK 2025-11-08 14:17:35.336140 | orchestrator | 2025-11-08 14:17:35.336145 | orchestrator | ++ export MANAGER_VERSION=latest 2025-11-08 14:17:35.336154 | orchestrator | ++ MANAGER_VERSION=latest 2025-11-08 14:17:35.336159 | orchestrator | + echo 2025-11-08 14:17:35.336163 | orchestrator | + echo '# CHECK' 2025-11-08 14:17:35.336167 | orchestrator | + echo 2025-11-08 14:17:35.336176 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-11-08 14:17:35.337099 | orchestrator | ++ semver latest 5.0.0 2025-11-08 14:17:35.390872 | orchestrator | 2025-11-08 14:17:35.390987 | orchestrator | ## Containers @ testbed-manager 2025-11-08 14:17:35.390999 | orchestrator | 2025-11-08 14:17:35.391009 | orchestrator | + [[ -1 -eq -1 ]] 2025-11-08 14:17:35.391017 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-11-08 14:17:35.391024 | orchestrator | + echo 2025-11-08 14:17:35.391032 | orchestrator | + echo '## Containers @ testbed-manager' 2025-11-08 14:17:35.391039 | orchestrator | + echo 2025-11-08 14:17:35.391045 | orchestrator | + osism container testbed-manager ps 2025-11-08 14:17:37.987041 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-11-08 14:17:37.987172 | orchestrator | 0202981ee926 registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_blackbox_exporter 2025-11-08 14:17:37.987197 | orchestrator | f2b3dccf9abc registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_alertmanager 2025-11-08 14:17:37.987210 | orchestrator | bc94d5f257db registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_cadvisor 2025-11-08 14:17:37.987230 | orchestrator | 44bb4a1e8df3 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_node_exporter 2025-11-08 14:17:37.987241 | orchestrator | ca9a63b9b1e5 registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_server 2025-11-08 14:17:37.987258 | orchestrator | 753adcd5fe16 registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 20 minutes ago Up 19 minutes cephclient 2025-11-08 14:17:37.987270 | orchestrator | a392a6edea7a registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 
minutes cron 2025-11-08 14:17:37.987282 | orchestrator | ea78a1ac2905 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes kolla_toolbox 2025-11-08 14:17:37.987293 | orchestrator | 34f8b2c48a83 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 33 minutes ago Up 33 minutes fluentd 2025-11-08 14:17:37.987330 | orchestrator | 767411eb05fa phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 33 minutes ago Up 32 minutes (healthy) 80/tcp phpmyadmin 2025-11-08 14:17:37.987342 | orchestrator | e924ba757300 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 34 minutes ago Up 33 minutes openstackclient 2025-11-08 14:17:37.987353 | orchestrator | 6b8ab126cb49 registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 34 minutes ago Up 33 minutes (healthy) 8080/tcp homer 2025-11-08 14:17:37.987364 | orchestrator | 2d2c06c87f5d registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" 41 minutes ago Up 41 minutes (healthy) osismclient 2025-11-08 14:17:37.987376 | orchestrator | d6c35b417007 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 41 minutes ago Up 41 minutes (healthy) manager-openstack-1 2025-11-08 14:17:37.987387 | orchestrator | a97781b3f589 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 41 minutes ago Up 41 minutes (healthy) manager-listener-1 2025-11-08 14:17:37.987419 | orchestrator | 388a376d85af registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" 41 minutes ago Up 41 minutes 192.168.16.5:3000->3000/tcp osism-frontend 2025-11-08 14:17:37.987437 | orchestrator | 5cd40ecb2a6d registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 41 minutes ago Up 41 minutes (healthy) manager-flower-1 2025-11-08 14:17:37.987449 | orchestrator | 9826e1f1cff9 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 41 minutes ago Up 41 minutes (healthy) manager-beat-1 2025-11-08 14:17:37.987460 | orchestrator | cead4d8b2941 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 41 minutes ago Up 41 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2025-11-08 14:17:37.987471 | orchestrator | f585423db5fe registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 56 minutes ago Up 56 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2025-11-08 14:17:37.987483 | orchestrator | aa506f7a7825 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" About an hour ago Up 40 minutes (healthy) manager-inventory_reconciler-1 2025-11-08 14:17:37.987494 | orchestrator | 646e2e54f6c2 registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" About an hour ago Up 40 minutes (healthy) kolla-ansible 2025-11-08 14:17:37.987506 | orchestrator | bf95514f08cc registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" About an hour ago Up 40 minutes (healthy) osism-ansible 2025-11-08 14:17:37.987526 | orchestrator | 438d3238a42c registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" About an hour ago Up 40 minutes (healthy) osism-kubernetes 2025-11-08 14:17:37.987538 | orchestrator | 7844d0496015 registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" About an hour ago Up 40 minutes (healthy) ceph-ansible 2025-11-08 14:17:37.987549 | orchestrator | 0d3f03258a6c registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" About an hour ago Up 41 minutes (healthy) 8000/tcp manager-ara-server-1 2025-11-08 14:17:37.987560 | orchestrator | 5f8a019f1e12 
registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" About an hour ago Up 41 minutes (healthy) 6379/tcp manager-redis-1 2025-11-08 14:17:37.987572 | orchestrator | b1a55d2ad33a registry.osism.tech/dockerhub/library/mariadb:11.8.3 "docker-entrypoint.s…" About an hour ago Up 41 minutes (healthy) 3306/tcp manager-mariadb-1 2025-11-08 14:17:37.987583 | orchestrator | 6e8519db5e46 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" About an hour ago Up About an hour (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2025-11-08 14:17:38.571204 | orchestrator | 2025-11-08 14:17:38.571302 | orchestrator | ## Images @ testbed-manager 2025-11-08 14:17:38.571316 | orchestrator | 2025-11-08 14:17:38.571326 | orchestrator | + echo 2025-11-08 14:17:38.571335 | orchestrator | + echo '## Images @ testbed-manager' 2025-11-08 14:17:38.571345 | orchestrator | + echo 2025-11-08 14:17:38.571353 | orchestrator | + osism container testbed-manager images 2025-11-08 14:17:41.495513 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-11-08 14:17:41.495662 | orchestrator | registry.osism.tech/osism/osism latest 7077afd74ad5 About an hour ago 324MB 2025-11-08 14:17:41.495679 | orchestrator | registry.osism.tech/osism/osism-frontend latest 58c8f8dd21c1 About an hour ago 238MB 2025-11-08 14:17:41.495722 | orchestrator | registry.osism.tech/osism/osism 0f9e52378561 4 hours ago 324MB 2025-11-08 14:17:41.495742 | orchestrator | registry.osism.tech/osism/osism-frontend f482419dd97e 4 hours ago 238MB 2025-11-08 14:17:41.495759 | orchestrator | registry.osism.tech/osism/homer v25.10.1 702c480a75fa 11 hours ago 11.5MB 2025-11-08 14:17:41.495776 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 74b5dfd46a89 11 hours ago 236MB 2025-11-08 14:17:41.495792 | orchestrator | registry.osism.tech/osism/cephclient reef 8e5ccd78cd03 11 hours ago 453MB 2025-11-08 14:17:41.495809 | orchestrator | registry.osism.tech/kolla/cron 2024.2 a5addf1386a5 13 hours ago 267MB 2025-11-08 14:17:41.495825 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 a44bfe24c824 13 hours ago 580MB 2025-11-08 14:17:41.495842 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 bcba33f822b3 13 hours ago 671MB 2025-11-08 14:17:41.495859 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 b854191d4a22 13 hours ago 358MB 2025-11-08 14:17:41.495876 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 d7bbe866c842 13 hours ago 307MB 2025-11-08 14:17:41.495888 | orchestrator | registry.osism.tech/kolla/prometheus-v2-server 2024.2 91f2007b64d7 13 hours ago 840MB 2025-11-08 14:17:41.495898 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 70b5c0147459 13 hours ago 405MB 2025-11-08 14:17:41.495927 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 3527ef3aeacb 13 hours ago 309MB 2025-11-08 14:17:41.495937 | orchestrator | registry.osism.tech/osism/osism-ansible latest f0dfc5507e5c 14 hours ago 597MB 2025-11-08 14:17:41.495972 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 e44d9a53e86e 14 hours ago 592MB 2025-11-08 14:17:41.495984 | orchestrator | registry.osism.tech/osism/ceph-ansible reef 78c5e6d91a7d 14 hours ago 545MB 2025-11-08 14:17:41.495994 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest bee5568eadad 14 hours ago 1.21GB 2025-11-08 14:17:41.496003 | orchestrator | 
registry.osism.tech/osism/inventory-reconciler latest 333aa1bbd6fb 14 hours ago 315MB 2025-11-08 14:17:41.496013 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 4 weeks ago 742MB 2025-11-08 14:17:41.496022 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 2 months ago 275MB 2025-11-08 14:17:41.496032 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.3 ea44c9edeacf 3 months ago 329MB 2025-11-08 14:17:41.496041 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 3 months ago 226MB 2025-11-08 14:17:41.496051 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.5-alpine f218e591b571 4 months ago 41.4MB 2025-11-08 14:17:41.496060 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 17 months ago 146MB 2025-11-08 14:17:41.981345 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-11-08 14:17:41.981493 | orchestrator | ++ semver latest 5.0.0 2025-11-08 14:17:42.051721 | orchestrator | 2025-11-08 14:17:42.051836 | orchestrator | ## Containers @ testbed-node-0 2025-11-08 14:17:42.051853 | orchestrator | 2025-11-08 14:17:42.051865 | orchestrator | + [[ -1 -eq -1 ]] 2025-11-08 14:17:42.051877 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-11-08 14:17:42.051889 | orchestrator | + echo 2025-11-08 14:17:42.051901 | orchestrator | + echo '## Containers @ testbed-node-0' 2025-11-08 14:17:42.051913 | orchestrator | + echo 2025-11-08 14:17:42.051924 | orchestrator | + osism container testbed-node-0 ps 2025-11-08 14:17:44.687059 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-11-08 14:17:44.687189 | orchestrator | 86a890ce2810 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-11-08 14:17:44.687208 | orchestrator | f2a5a95fcd9e registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_housekeeping 2025-11-08 14:17:44.687220 | orchestrator | 8db2f705c535 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_health_manager 2025-11-08 14:17:44.687231 | orchestrator | b03554c78055 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent 2025-11-08 14:17:44.687272 | orchestrator | 2ca5f34bda3c registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-11-08 14:17:44.687285 | orchestrator | bf4869e2c20b registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_novncproxy 2025-11-08 14:17:44.687296 | orchestrator | 8824017ef5cd registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_conductor 2025-11-08 14:17:44.687331 | orchestrator | e6cbb75c7f5e registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_api 2025-11-08 14:17:44.687343 | orchestrator | a0a3608a35f7 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_scheduler 2025-11-08 14:17:44.687354 | orchestrator | 8c264fe798cf registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes grafana 2025-11-08 14:17:44.687365 | orchestrator | 6f750b942168 
registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api 2025-11-08 14:17:44.687375 | orchestrator | 6d5ee03639a7 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_scheduler 2025-11-08 14:17:44.687386 | orchestrator | 97f8c32da882 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_api 2025-11-08 14:17:44.687397 | orchestrator | 05b25b5d909c registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_conductor 2025-11-08 14:17:44.687409 | orchestrator | a76fcf0e0cba registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_api 2025-11-08 14:17:44.687419 | orchestrator | eedca36be7e2 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) placement_api 2025-11-08 14:17:44.687430 | orchestrator | 38edaa9a4026 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) neutron_server 2025-11-08 14:17:44.687441 | orchestrator | d41c34a1dd94 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_worker 2025-11-08 14:17:44.687452 | orchestrator | 92dfed943be2 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_elasticsearch_exporter 2025-11-08 14:17:44.687464 | orchestrator | 090d2d27cd3c registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_mdns 2025-11-08 14:17:44.687476 | orchestrator | c88c3a26649b registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_cadvisor 2025-11-08 14:17:44.687508 | orchestrator | 8eea3b423141 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_producer 2025-11-08 14:17:44.687520 | orchestrator | d0084204d9a5 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_central 2025-11-08 14:17:44.687531 | orchestrator | 143aeb68e1ab registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_memcached_exporter 2025-11-08 14:17:44.687554 | orchestrator | 80982b4b90bf registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_mysqld_exporter 2025-11-08 14:17:44.687566 | orchestrator | 81fa091b8f05 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_api 2025-11-08 14:17:44.687577 | orchestrator | 6baa3f268bf5 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_node_exporter 2025-11-08 14:17:44.687595 | orchestrator | f980e29dfa75 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_backend_bind9 2025-11-08 14:17:44.687606 | orchestrator | 37c3869f8b02 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_worker 2025-11-08 14:17:44.687617 | orchestrator | 1c2f4bb63025 
registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_keystone_listener 2025-11-08 14:17:44.687627 | orchestrator | 238f316e925c registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_api 2025-11-08 14:17:44.687638 | orchestrator | 290b5ac002d4 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 18 minutes ago Up 18 minutes ceph-mgr-testbed-node-0 2025-11-08 14:17:44.687649 | orchestrator | 0827390bfe9b registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone 2025-11-08 14:17:44.687660 | orchestrator | 06e3f2855aaf registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_fernet 2025-11-08 14:17:44.687671 | orchestrator | 1147a6d5c8c9 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) keystone_ssh 2025-11-08 14:17:44.687681 | orchestrator | baa334444d34 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) horizon 2025-11-08 14:17:44.687692 | orchestrator | 4a0d4f21f2fc registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb 2025-11-08 14:17:44.687703 | orchestrator | 854509c7b402 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) opensearch_dashboards 2025-11-08 14:17:44.687714 | orchestrator | c5f920d71ae4 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) opensearch 2025-11-08 14:17:44.687725 | orchestrator | 2c300c472a78 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 25 minutes ago Up 25 minutes ceph-crash-testbed-node-0 2025-11-08 14:17:44.687736 | orchestrator | 45cbc833ebf2 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes keepalived 2025-11-08 14:17:44.687747 | orchestrator | 13724e9d297a registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) proxysql 2025-11-08 14:17:44.687758 | orchestrator | ee341237474e registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) haproxy 2025-11-08 14:17:44.687768 | orchestrator | 4226cf569a12 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_northd 2025-11-08 14:17:44.687795 | orchestrator | 7e071d2d8877 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_sb_db 2025-11-08 14:17:44.687807 | orchestrator | c3d50fa61d52 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_nb_db 2025-11-08 14:17:44.687818 | orchestrator | 0c4807ab1731 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 30 minutes ago Up 30 minutes ceph-mon-testbed-node-0 2025-11-08 14:17:44.687836 | orchestrator | ac01c8f9a5cc registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes ovn_controller 2025-11-08 14:17:44.687846 | orchestrator | c968353fc0f8 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) rabbitmq 2025-11-08 14:17:44.687857 | orchestrator | f3059c98f9df 
registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) openvswitch_vswitchd 2025-11-08 14:17:44.687874 | orchestrator | 139a5575353f registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) openvswitch_db 2025-11-08 14:17:44.687885 | orchestrator | b5a53c456c6b registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) redis_sentinel 2025-11-08 14:17:44.687896 | orchestrator | 9416daab30dc registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) redis 2025-11-08 14:17:44.687907 | orchestrator | da4d69e320d3 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) memcached 2025-11-08 14:17:44.687918 | orchestrator | 798bc9223e91 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes cron 2025-11-08 14:17:44.687928 | orchestrator | 0c8ac4205516 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 33 minutes ago Up 33 minutes kolla_toolbox 2025-11-08 14:17:44.687939 | orchestrator | 0e4076e52ce4 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 33 minutes ago Up 33 minutes fluentd 2025-11-08 14:17:45.085853 | orchestrator | 2025-11-08 14:17:45.086077 | orchestrator | ## Images @ testbed-node-0 2025-11-08 14:17:45.086097 | orchestrator | 2025-11-08 14:17:45.086109 | orchestrator | + echo 2025-11-08 14:17:45.086121 | orchestrator | + echo '## Images @ testbed-node-0' 2025-11-08 14:17:45.086134 | orchestrator | + echo 2025-11-08 14:17:45.086145 | orchestrator | + osism container testbed-node-0 images 2025-11-08 14:17:47.832136 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-11-08 14:17:47.832257 | orchestrator | registry.osism.tech/osism/ceph-daemon reef d96687d69e87 11 hours ago 1.27GB 2025-11-08 14:17:47.832270 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 4d3cf5c070cb 13 hours ago 394MB 2025-11-08 14:17:47.832280 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 7ac963e25265 13 hours ago 275MB 2025-11-08 14:17:47.832288 | orchestrator | registry.osism.tech/kolla/cron 2024.2 a5addf1386a5 13 hours ago 267MB 2025-11-08 14:17:47.832296 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 b12588b01d79 13 hours ago 278MB 2025-11-08 14:17:47.832304 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 f589cf104942 13 hours ago 324MB 2025-11-08 14:17:47.832312 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 036ba53aad32 13 hours ago 267MB 2025-11-08 14:17:47.832321 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 a44bfe24c824 13 hours ago 580MB 2025-11-08 14:17:47.832329 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 89acee0145af 13 hours ago 1GB 2025-11-08 14:17:47.832337 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 a89ff0a805ac 13 hours ago 1.56GB 2025-11-08 14:17:47.832345 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 18af5ecf0acb 13 hours ago 1.53GB 2025-11-08 14:17:47.832373 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 bcba33f822b3 13 hours ago 671MB 2025-11-08 14:17:47.832397 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 b77069354a5a 13 hours ago 280MB 2025-11-08 14:17:47.832405 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 69f2f3baad8e 13 hours ago 280MB 2025-11-08 14:17:47.832413 | 
orchestrator | registry.osism.tech/kolla/horizon 2024.2 6b5a7c7534c0 13 hours ago 1.15GB 2025-11-08 14:17:47.832421 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 681c8a2d8df5 13 hours ago 453MB 2025-11-08 14:17:47.832428 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 b854191d4a22 13 hours ago 358MB 2025-11-08 14:17:47.832436 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 25fe82386291 13 hours ago 300MB 2025-11-08 14:17:47.832444 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 e05b1d22cc91 13 hours ago 302MB 2025-11-08 14:17:47.832452 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 d7bbe866c842 13 hours ago 307MB 2025-11-08 14:17:47.832460 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 0c0ba8931658 13 hours ago 293MB 2025-11-08 14:17:47.832467 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 734e432a7ecc 13 hours ago 274MB 2025-11-08 14:17:47.832475 | orchestrator | registry.osism.tech/kolla/redis 2024.2 5a70656ce471 13 hours ago 274MB 2025-11-08 14:17:47.832483 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 bb5b64110bc3 13 hours ago 841MB 2025-11-08 14:17:47.832491 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 b2c492d59340 13 hours ago 841MB 2025-11-08 14:17:47.832498 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 f6b753dc2be0 13 hours ago 841MB 2025-11-08 14:17:47.832506 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 7ac897f6daa9 13 hours ago 841MB 2025-11-08 14:17:47.832514 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 2d7fbd74bb49 13 hours ago 991MB 2025-11-08 14:17:47.832522 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.2 cff0d95945b4 13 hours ago 1.05GB 2025-11-08 14:17:47.832529 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 c96ff65d6f0c 13 hours ago 977MB 2025-11-08 14:17:47.832537 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 2242235e1942 13 hours ago 1.1GB 2025-11-08 14:17:47.832545 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 20283e402749 13 hours ago 1.24GB 2025-11-08 14:17:47.832553 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 ad747ec6bd84 13 hours ago 1.13GB 2025-11-08 14:17:47.832560 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 36cecefde034 13 hours ago 1.21GB 2025-11-08 14:17:47.832568 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 83711aef1d58 13 hours ago 1.37GB 2025-11-08 14:17:47.832576 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 c355786586de 13 hours ago 1.21GB 2025-11-08 14:17:47.832599 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 7d2196dc8ff1 13 hours ago 1.21GB 2025-11-08 14:17:47.832607 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 224aedbd5d92 13 hours ago 1.4GB 2025-11-08 14:17:47.832615 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 0872eb25ee3e 13 hours ago 1.4GB 2025-11-08 14:17:47.832623 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 ea00a65dc423 13 hours ago 975MB 2025-11-08 14:17:47.832636 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 187f60b705c2 13 hours ago 974MB 2025-11-08 14:17:47.832650 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 929c3dbfc660 13 hours ago 975MB 2025-11-08 14:17:47.832658 | orchestrator | 
registry.osism.tech/kolla/aodh-notifier 2024.2 0093c90ce85d 13 hours ago 975MB 2025-11-08 14:17:47.832666 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 c0e768a4fcf1 13 hours ago 990MB 2025-11-08 14:17:47.832673 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 77045ffa4a6b 13 hours ago 990MB 2025-11-08 14:17:47.832681 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 ea0adbf4905d 13 hours ago 985MB 2025-11-08 14:17:47.832689 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 b61ef62e78e4 13 hours ago 986MB 2025-11-08 14:17:47.832696 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 60872c06497b 13 hours ago 986MB 2025-11-08 14:17:47.832704 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 9421824aceea 13 hours ago 986MB 2025-11-08 14:17:47.832712 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 4998386cc39c 13 hours ago 1.16GB 2025-11-08 14:17:47.832720 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 b0e6bb572113 13 hours ago 978MB 2025-11-08 14:17:47.832727 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 89ac8e72708e 13 hours ago 977MB 2025-11-08 14:17:47.832842 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 73433fd62cdc 13 hours ago 992MB 2025-11-08 14:17:47.832855 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 6b6aeb3f0d54 13 hours ago 991MB 2025-11-08 14:17:47.832874 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 1d9353637a46 13 hours ago 992MB 2025-11-08 14:17:47.832882 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 7fcfdc0cece5 13 hours ago 1.05GB 2025-11-08 14:17:47.832890 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 38ece7c6fc9a 13 hours ago 1.03GB 2025-11-08 14:17:47.832898 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 d002cac3f007 13 hours ago 1.03GB 2025-11-08 14:17:47.832906 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 0c77aa017cf5 13 hours ago 1.03GB 2025-11-08 14:17:47.832914 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 45138f6539d1 13 hours ago 1.05GB 2025-11-08 14:17:47.832922 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 11dcd0d2e656 13 hours ago 1.04GB 2025-11-08 14:17:47.832930 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 99b68c7e04e3 13 hours ago 1.04GB 2025-11-08 14:17:47.832938 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 556b766e9cb0 13 hours ago 1.09GB 2025-11-08 14:17:48.391022 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-11-08 14:17:48.391346 | orchestrator | ++ semver latest 5.0.0 2025-11-08 14:17:48.460294 | orchestrator | 2025-11-08 14:17:48.460415 | orchestrator | ## Containers @ testbed-node-1 2025-11-08 14:17:48.460442 | orchestrator | 2025-11-08 14:17:48.460461 | orchestrator | + [[ -1 -eq -1 ]] 2025-11-08 14:17:48.460481 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-11-08 14:17:48.460499 | orchestrator | + echo 2025-11-08 14:17:48.460519 | orchestrator | + echo '## Containers @ testbed-node-1' 2025-11-08 14:17:48.460538 | orchestrator | + echo 2025-11-08 14:17:48.460556 | orchestrator | + osism container testbed-node-1 ps 2025-11-08 14:17:51.060473 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-11-08 14:17:51.060586 | orchestrator | fab39ffa0eea 
registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 5 minutes ago Up 4 minutes (healthy) octavia_worker 2025-11-08 14:17:51.060630 | orchestrator | c1c6d9a211c5 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_housekeeping 2025-11-08 14:17:51.060659 | orchestrator | 86f1f5d620b1 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_health_manager 2025-11-08 14:17:51.060671 | orchestrator | df9135a81fa9 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent 2025-11-08 14:17:51.060683 | orchestrator | 7c7d5d97c65e registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-11-08 14:17:51.060694 | orchestrator | 5a9ee25d902c registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_novncproxy 2025-11-08 14:17:51.060705 | orchestrator | 84d58a9d707b registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_conductor 2025-11-08 14:17:51.060716 | orchestrator | dbd85bc5ca5b registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes grafana 2025-11-08 14:17:51.060727 | orchestrator | e185054981ea registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_api 2025-11-08 14:17:51.060738 | orchestrator | f344f9e1e836 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_scheduler 2025-11-08 14:17:51.060749 | orchestrator | 55ddd4533406 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api 2025-11-08 14:17:51.060765 | orchestrator | 528b6f474674 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_scheduler 2025-11-08 14:17:51.060776 | orchestrator | d07929165202 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_api 2025-11-08 14:17:51.060787 | orchestrator | 267a48818375 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_conductor 2025-11-08 14:17:51.060798 | orchestrator | bdfc70390cd4 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_api 2025-11-08 14:17:51.060809 | orchestrator | 7e8abbbcaf64 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) placement_api 2025-11-08 14:17:51.060820 | orchestrator | aa34ee6fb4e4 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) neutron_server 2025-11-08 14:17:51.060831 | orchestrator | 42936b1674dd registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_worker 2025-11-08 14:17:51.060842 | orchestrator | 74f6e5362da2 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_elasticsearch_exporter 2025-11-08 14:17:51.060853 | orchestrator | fc5a38b5c5d3 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes 
(healthy) designate_mdns 2025-11-08 14:17:51.060864 | orchestrator | 0d26a0a8a081 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_cadvisor 2025-11-08 14:17:51.060900 | orchestrator | 299a2f00ea99 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_producer 2025-11-08 14:17:51.060912 | orchestrator | 3fe2cf4fca0f registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_central 2025-11-08 14:17:51.060928 | orchestrator | ebddf1adf179 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_memcached_exporter 2025-11-08 14:17:51.060939 | orchestrator | 8e0700ab3b8a registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_api 2025-11-08 14:17:51.061051 | orchestrator | a4a25261a8f0 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_mysqld_exporter 2025-11-08 14:17:51.061066 | orchestrator | 89fd7dc40a08 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_node_exporter 2025-11-08 14:17:51.061079 | orchestrator | 2f8df05819db registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_backend_bind9 2025-11-08 14:17:51.061092 | orchestrator | 660a614650f6 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_worker 2025-11-08 14:17:51.061104 | orchestrator | 46b02569bb1e registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_keystone_listener 2025-11-08 14:17:51.061116 | orchestrator | 89b23e8d335f registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 18 minutes ago Up 17 minutes (healthy) barbican_api 2025-11-08 14:17:51.061176 | orchestrator | 5c826e890fd6 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 18 minutes ago Up 18 minutes ceph-mgr-testbed-node-1 2025-11-08 14:17:51.061189 | orchestrator | 6795a5873043 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone 2025-11-08 14:17:51.061202 | orchestrator | 420a3d82aed9 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_fernet 2025-11-08 14:17:51.061215 | orchestrator | 63c2ebcfc531 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) horizon 2025-11-08 14:17:51.061228 | orchestrator | b57692371700 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) keystone_ssh 2025-11-08 14:17:51.061240 | orchestrator | 9c838f225c95 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch_dashboards 2025-11-08 14:17:51.061253 | orchestrator | fbea2a540126 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 24 minutes ago Up 24 minutes (healthy) mariadb 2025-11-08 14:17:51.061266 | orchestrator | b7e26f1b1aed registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) opensearch 2025-11-08 
14:17:51.061388 | orchestrator | 8992349a015c registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 25 minutes ago Up 25 minutes ceph-crash-testbed-node-1 2025-11-08 14:17:51.061413 | orchestrator | ab1605719a94 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes keepalived 2025-11-08 14:17:51.061424 | orchestrator | ffeaa8f0775e registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) proxysql 2025-11-08 14:17:51.061435 | orchestrator | 11426e7a7267 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) haproxy 2025-11-08 14:17:51.061446 | orchestrator | c39224aa61d2 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 29 minutes ago Up 28 minutes ovn_northd 2025-11-08 14:17:51.061457 | orchestrator | 9e982c15e262 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 28 minutes ovn_sb_db 2025-11-08 14:17:51.061468 | orchestrator | c2dfdaa4d0d3 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_nb_db 2025-11-08 14:17:51.061478 | orchestrator | d7a8bbb89595 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes ovn_controller 2025-11-08 14:17:51.061496 | orchestrator | 1ce83fb51fca registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) rabbitmq 2025-11-08 14:17:51.061507 | orchestrator | 91989c7afd33 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 30 minutes ago Up 30 minutes ceph-mon-testbed-node-1 2025-11-08 14:17:51.061518 | orchestrator | 393b92501db4 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) openvswitch_vswitchd 2025-11-08 14:17:51.061529 | orchestrator | e2a450b6567a registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) openvswitch_db 2025-11-08 14:17:51.061539 | orchestrator | 31778047c236 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) redis_sentinel 2025-11-08 14:17:51.061550 | orchestrator | aed35aee13c1 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) redis 2025-11-08 14:17:51.061561 | orchestrator | 7dd922b3f786 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) memcached 2025-11-08 14:17:51.061572 | orchestrator | 6c0b865137aa registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes cron 2025-11-08 14:17:51.061583 | orchestrator | f5d5b809041a registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes kolla_toolbox 2025-11-08 14:17:51.061594 | orchestrator | 79962d11d622 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 33 minutes ago Up 33 minutes fluentd 2025-11-08 14:17:51.402773 | orchestrator | 2025-11-08 14:17:51.402907 | orchestrator | ## Images @ testbed-node-1 2025-11-08 14:17:51.402921 | orchestrator | 2025-11-08 14:17:51.402931 | orchestrator | + echo 2025-11-08 14:17:51.402941 | orchestrator | + echo '## Images @ testbed-node-1' 2025-11-08 14:17:51.402999 | orchestrator | + echo 2025-11-08 14:17:51.403009 | orchestrator | + osism container testbed-node-1 images 2025-11-08 14:17:53.983147 | orchestrator | REPOSITORY TAG 
IMAGE ID CREATED SIZE 2025-11-08 14:17:53.983254 | orchestrator | registry.osism.tech/osism/ceph-daemon reef d96687d69e87 11 hours ago 1.27GB 2025-11-08 14:17:53.983294 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 4d3cf5c070cb 13 hours ago 394MB 2025-11-08 14:17:53.983314 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 7ac963e25265 13 hours ago 275MB 2025-11-08 14:17:53.983342 | orchestrator | registry.osism.tech/kolla/cron 2024.2 a5addf1386a5 13 hours ago 267MB 2025-11-08 14:17:53.983362 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 b12588b01d79 13 hours ago 278MB 2025-11-08 14:17:53.983379 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 f589cf104942 13 hours ago 324MB 2025-11-08 14:17:53.983396 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 036ba53aad32 13 hours ago 267MB 2025-11-08 14:17:53.983413 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 a44bfe24c824 13 hours ago 580MB 2025-11-08 14:17:53.983430 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 89acee0145af 13 hours ago 1GB 2025-11-08 14:17:53.983448 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 18af5ecf0acb 13 hours ago 1.53GB 2025-11-08 14:17:53.983466 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 a89ff0a805ac 13 hours ago 1.56GB 2025-11-08 14:17:53.983484 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 bcba33f822b3 13 hours ago 671MB 2025-11-08 14:17:53.983503 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 b77069354a5a 13 hours ago 280MB 2025-11-08 14:17:53.983521 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 69f2f3baad8e 13 hours ago 280MB 2025-11-08 14:17:53.983539 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 6b5a7c7534c0 13 hours ago 1.15GB 2025-11-08 14:17:53.983558 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 681c8a2d8df5 13 hours ago 453MB 2025-11-08 14:17:53.983575 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 b854191d4a22 13 hours ago 358MB 2025-11-08 14:17:53.983593 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 25fe82386291 13 hours ago 300MB 2025-11-08 14:17:53.983612 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 e05b1d22cc91 13 hours ago 302MB 2025-11-08 14:17:53.983629 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 d7bbe866c842 13 hours ago 307MB 2025-11-08 14:17:53.983648 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 0c0ba8931658 13 hours ago 293MB 2025-11-08 14:17:53.983668 | orchestrator | registry.osism.tech/kolla/redis 2024.2 5a70656ce471 13 hours ago 274MB 2025-11-08 14:17:53.983687 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 734e432a7ecc 13 hours ago 274MB 2025-11-08 14:17:53.983706 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 bb5b64110bc3 13 hours ago 841MB 2025-11-08 14:17:53.983721 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 b2c492d59340 13 hours ago 841MB 2025-11-08 14:17:53.983734 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 f6b753dc2be0 13 hours ago 841MB 2025-11-08 14:17:53.983746 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 7ac897f6daa9 13 hours ago 841MB 2025-11-08 14:17:53.983777 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 c96ff65d6f0c 13 hours ago 977MB 2025-11-08 14:17:53.983790 | orchestrator | 
registry.osism.tech/kolla/glance-api 2024.2 2242235e1942 13 hours ago 1.1GB 2025-11-08 14:17:53.983803 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 20283e402749 13 hours ago 1.24GB 2025-11-08 14:17:53.983815 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 ad747ec6bd84 13 hours ago 1.13GB 2025-11-08 14:17:53.983838 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 36cecefde034 13 hours ago 1.21GB 2025-11-08 14:17:53.983850 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 83711aef1d58 13 hours ago 1.37GB 2025-11-08 14:17:53.983862 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 c355786586de 13 hours ago 1.21GB 2025-11-08 14:17:53.983875 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 7d2196dc8ff1 13 hours ago 1.21GB 2025-11-08 14:17:53.983888 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 224aedbd5d92 13 hours ago 1.4GB 2025-11-08 14:17:53.983921 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 0872eb25ee3e 13 hours ago 1.4GB 2025-11-08 14:17:53.983934 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 c0e768a4fcf1 13 hours ago 990MB 2025-11-08 14:17:53.983976 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 77045ffa4a6b 13 hours ago 990MB 2025-11-08 14:17:53.983996 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 ea0adbf4905d 13 hours ago 985MB 2025-11-08 14:17:53.984014 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 b61ef62e78e4 13 hours ago 986MB 2025-11-08 14:17:53.984031 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 60872c06497b 13 hours ago 986MB 2025-11-08 14:17:53.984050 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 9421824aceea 13 hours ago 986MB 2025-11-08 14:17:53.984068 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 4998386cc39c 13 hours ago 1.16GB 2025-11-08 14:17:53.984087 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 73433fd62cdc 13 hours ago 992MB 2025-11-08 14:17:53.984105 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 6b6aeb3f0d54 13 hours ago 991MB 2025-11-08 14:17:53.984121 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 1d9353637a46 13 hours ago 992MB 2025-11-08 14:17:53.984139 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 7fcfdc0cece5 13 hours ago 1.05GB 2025-11-08 14:17:53.984158 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 38ece7c6fc9a 13 hours ago 1.03GB 2025-11-08 14:17:53.984170 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 d002cac3f007 13 hours ago 1.03GB 2025-11-08 14:17:53.984180 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 0c77aa017cf5 13 hours ago 1.03GB 2025-11-08 14:17:53.984191 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 45138f6539d1 13 hours ago 1.05GB 2025-11-08 14:17:53.984201 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 11dcd0d2e656 13 hours ago 1.04GB 2025-11-08 14:17:53.984212 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 99b68c7e04e3 13 hours ago 1.04GB 2025-11-08 14:17:53.984223 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 556b766e9cb0 13 hours ago 1.09GB 2025-11-08 14:17:54.382143 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-11-08 14:17:54.382773 | orchestrator | ++ semver latest 5.0.0 2025-11-08 
14:17:54.441208 | orchestrator | 2025-11-08 14:17:54.441321 | orchestrator | ## Containers @ testbed-node-2 2025-11-08 14:17:54.441335 | orchestrator | 2025-11-08 14:17:54.441347 | orchestrator | + [[ -1 -eq -1 ]] 2025-11-08 14:17:54.441357 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-11-08 14:17:54.441367 | orchestrator | + echo 2025-11-08 14:17:54.441378 | orchestrator | + echo '## Containers @ testbed-node-2' 2025-11-08 14:17:54.441388 | orchestrator | + echo 2025-11-08 14:17:54.441398 | orchestrator | + osism container testbed-node-2 ps 2025-11-08 14:17:57.065383 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-11-08 14:17:57.065533 | orchestrator | 35a6c8e1a78f registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_worker 2025-11-08 14:17:57.065554 | orchestrator | 1b42115fe4e0 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_housekeeping 2025-11-08 14:17:57.065566 | orchestrator | 66b021a3f352 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_health_manager 2025-11-08 14:17:57.065578 | orchestrator | 1b4a1e6eb40d registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent 2025-11-08 14:17:57.065590 | orchestrator | 2489936cd31d registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-11-08 14:17:57.065601 | orchestrator | aa303e417934 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_novncproxy 2025-11-08 14:17:57.066722 | orchestrator | 5985fb822f91 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_conductor 2025-11-08 14:17:57.066770 | orchestrator | 0a2ea9af2cfe registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes grafana 2025-11-08 14:17:57.066779 | orchestrator | 3d66257a27fe registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_api 2025-11-08 14:17:57.066787 | orchestrator | 365e55f7c3fd registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_scheduler 2025-11-08 14:17:57.066795 | orchestrator | 5fd5816a2ac8 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api 2025-11-08 14:17:57.066819 | orchestrator | b3228e750210 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_scheduler 2025-11-08 14:17:57.066827 | orchestrator | 266e80b5a8ce registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_api 2025-11-08 14:17:57.066834 | orchestrator | 20573b864728 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_conductor 2025-11-08 14:17:57.066842 | orchestrator | 7b7cc0b5f59a registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_api 2025-11-08 14:17:57.066849 | orchestrator | 5d91b3bbc34f registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) placement_api 2025-11-08 14:17:57.066856 | 
orchestrator | 985fa6257f75 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) neutron_server 2025-11-08 14:17:57.066864 | orchestrator | 1ffd838437c8 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_worker 2025-11-08 14:17:57.066871 | orchestrator | 0468b7ab12d7 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_elasticsearch_exporter 2025-11-08 14:17:57.066879 | orchestrator | 161c756dd19a registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_mdns 2025-11-08 14:17:57.066904 | orchestrator | ad59035c3bf1 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_producer 2025-11-08 14:17:57.066912 | orchestrator | f24a350d33c4 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_cadvisor 2025-11-08 14:17:57.066919 | orchestrator | d44b4b3bfe85 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_central 2025-11-08 14:17:57.066927 | orchestrator | 39925c502dd3 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_memcached_exporter 2025-11-08 14:17:57.066934 | orchestrator | b1c1d11df613 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_api 2025-11-08 14:17:57.066942 | orchestrator | 0e9c92680d9c registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_mysqld_exporter 2025-11-08 14:17:57.066991 | orchestrator | e270b6b1f3f4 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_backend_bind9 2025-11-08 14:17:57.067000 | orchestrator | 6c4190f066ba registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_node_exporter 2025-11-08 14:17:57.067028 | orchestrator | d5dafa1a8ff2 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_worker 2025-11-08 14:17:57.067041 | orchestrator | 3bf5fdd0799b registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 18 minutes ago Up 17 minutes (healthy) barbican_keystone_listener 2025-11-08 14:17:57.067053 | orchestrator | 50a4790b6169 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) barbican_api 2025-11-08 14:17:57.067113 | orchestrator | b1d1dee8b68b registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 18 minutes ago Up 18 minutes ceph-mgr-testbed-node-2 2025-11-08 14:17:57.067128 | orchestrator | 9aff58d203fa registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone 2025-11-08 14:17:57.067140 | orchestrator | ff2c6b24fd3a registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) horizon 2025-11-08 14:17:57.067153 | orchestrator | 584c4342d3c5 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 21 minutes ago Up 20 minutes (healthy) keystone_fernet 2025-11-08 14:17:57.067166 
| orchestrator | ec6b93ffcd03 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) keystone_ssh 2025-11-08 14:17:57.067177 | orchestrator | 53c87efba8b7 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch_dashboards 2025-11-08 14:17:57.067189 | orchestrator | 941b52bb77fd registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 23 minutes ago Up 23 minutes (healthy) mariadb 2025-11-08 14:17:57.067202 | orchestrator | cf42e79cf9ad registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) opensearch 2025-11-08 14:17:57.067227 | orchestrator | 8c91d29cae84 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 25 minutes ago Up 25 minutes ceph-crash-testbed-node-2 2025-11-08 14:17:57.067239 | orchestrator | f4add02af69a registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes keepalived 2025-11-08 14:17:57.067251 | orchestrator | ec0592e606d4 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) proxysql 2025-11-08 14:17:57.067263 | orchestrator | c4526daf7156 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) haproxy 2025-11-08 14:17:57.067275 | orchestrator | 300b5bae2450 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 29 minutes ago Up 28 minutes ovn_northd 2025-11-08 14:17:57.067287 | orchestrator | b9231cab6606 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_sb_db 2025-11-08 14:17:57.067306 | orchestrator | 1eb9356da23f registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_nb_db 2025-11-08 14:17:57.067318 | orchestrator | 8ed4cfb09f8b registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) rabbitmq 2025-11-08 14:17:57.067330 | orchestrator | b3ddabae68c4 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes ovn_controller 2025-11-08 14:17:57.067342 | orchestrator | bf21459e2a96 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 30 minutes ago Up 30 minutes ceph-mon-testbed-node-2 2025-11-08 14:17:57.067354 | orchestrator | f2957fadca76 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) openvswitch_vswitchd 2025-11-08 14:17:57.067367 | orchestrator | 950a21e6bba6 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) openvswitch_db 2025-11-08 14:17:57.067392 | orchestrator | c950185d1436 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) redis_sentinel 2025-11-08 14:17:57.067410 | orchestrator | ff4e863ec15e registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) redis 2025-11-08 14:17:57.067517 | orchestrator | c12088038fdd registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) memcached 2025-11-08 14:17:57.067536 | orchestrator | 667af73fb9ef registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes cron 2025-11-08 14:17:57.067548 | orchestrator | 337205edf3d1 registry.osism.tech/kolla/kolla-toolbox:2024.2 
"dumb-init --single-…" 32 minutes ago Up 32 minutes kolla_toolbox 2025-11-08 14:17:57.067561 | orchestrator | ca912e466097 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 33 minutes ago Up 33 minutes fluentd 2025-11-08 14:17:57.409693 | orchestrator | 2025-11-08 14:17:57.409801 | orchestrator | ## Images @ testbed-node-2 2025-11-08 14:17:57.409820 | orchestrator | 2025-11-08 14:17:57.409833 | orchestrator | + echo 2025-11-08 14:17:57.409845 | orchestrator | + echo '## Images @ testbed-node-2' 2025-11-08 14:17:57.409878 | orchestrator | + echo 2025-11-08 14:17:57.409900 | orchestrator | + osism container testbed-node-2 images 2025-11-08 14:17:59.968890 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-11-08 14:17:59.969019 | orchestrator | registry.osism.tech/osism/ceph-daemon reef d96687d69e87 11 hours ago 1.27GB 2025-11-08 14:17:59.969026 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 4d3cf5c070cb 13 hours ago 394MB 2025-11-08 14:17:59.969030 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 7ac963e25265 13 hours ago 275MB 2025-11-08 14:17:59.969034 | orchestrator | registry.osism.tech/kolla/cron 2024.2 a5addf1386a5 13 hours ago 267MB 2025-11-08 14:17:59.969038 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 b12588b01d79 13 hours ago 278MB 2025-11-08 14:17:59.969042 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 036ba53aad32 13 hours ago 267MB 2025-11-08 14:17:59.969046 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 f589cf104942 13 hours ago 324MB 2025-11-08 14:17:59.969049 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 a44bfe24c824 13 hours ago 580MB 2025-11-08 14:17:59.969053 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 89acee0145af 13 hours ago 1GB 2025-11-08 14:17:59.969057 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 a89ff0a805ac 13 hours ago 1.56GB 2025-11-08 14:17:59.969061 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 18af5ecf0acb 13 hours ago 1.53GB 2025-11-08 14:17:59.969065 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 bcba33f822b3 13 hours ago 671MB 2025-11-08 14:17:59.969068 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 b77069354a5a 13 hours ago 280MB 2025-11-08 14:17:59.969072 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 69f2f3baad8e 13 hours ago 280MB 2025-11-08 14:17:59.969076 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 6b5a7c7534c0 13 hours ago 1.15GB 2025-11-08 14:17:59.969080 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 681c8a2d8df5 13 hours ago 453MB 2025-11-08 14:17:59.969084 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 b854191d4a22 13 hours ago 358MB 2025-11-08 14:17:59.969088 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 25fe82386291 13 hours ago 300MB 2025-11-08 14:17:59.969092 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 e05b1d22cc91 13 hours ago 302MB 2025-11-08 14:17:59.969096 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 d7bbe866c842 13 hours ago 307MB 2025-11-08 14:17:59.969099 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 0c0ba8931658 13 hours ago 293MB 2025-11-08 14:17:59.969103 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 734e432a7ecc 13 hours ago 274MB 2025-11-08 14:17:59.969107 | orchestrator | registry.osism.tech/kolla/redis 2024.2 
5a70656ce471 13 hours ago 274MB 2025-11-08 14:17:59.969110 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 bb5b64110bc3 13 hours ago 841MB 2025-11-08 14:17:59.969114 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 b2c492d59340 13 hours ago 841MB 2025-11-08 14:17:59.969118 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 f6b753dc2be0 13 hours ago 841MB 2025-11-08 14:17:59.969121 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 7ac897f6daa9 13 hours ago 841MB 2025-11-08 14:17:59.969125 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 c96ff65d6f0c 13 hours ago 977MB 2025-11-08 14:17:59.969129 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 2242235e1942 13 hours ago 1.1GB 2025-11-08 14:17:59.969133 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 20283e402749 13 hours ago 1.24GB 2025-11-08 14:17:59.969140 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 ad747ec6bd84 13 hours ago 1.13GB 2025-11-08 14:17:59.969144 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 36cecefde034 13 hours ago 1.21GB 2025-11-08 14:17:59.969148 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 83711aef1d58 13 hours ago 1.37GB 2025-11-08 14:17:59.969152 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 c355786586de 13 hours ago 1.21GB 2025-11-08 14:17:59.969155 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 7d2196dc8ff1 13 hours ago 1.21GB 2025-11-08 14:17:59.969159 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 224aedbd5d92 13 hours ago 1.4GB 2025-11-08 14:17:59.969173 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 0872eb25ee3e 13 hours ago 1.4GB 2025-11-08 14:17:59.969177 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 c0e768a4fcf1 13 hours ago 990MB 2025-11-08 14:17:59.969181 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 77045ffa4a6b 13 hours ago 990MB 2025-11-08 14:17:59.969185 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 ea0adbf4905d 13 hours ago 985MB 2025-11-08 14:17:59.969188 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 b61ef62e78e4 13 hours ago 986MB 2025-11-08 14:17:59.969192 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 60872c06497b 13 hours ago 986MB 2025-11-08 14:17:59.969196 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 9421824aceea 13 hours ago 986MB 2025-11-08 14:17:59.969211 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 4998386cc39c 13 hours ago 1.16GB 2025-11-08 14:17:59.969215 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 73433fd62cdc 13 hours ago 992MB 2025-11-08 14:17:59.969219 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 6b6aeb3f0d54 13 hours ago 991MB 2025-11-08 14:17:59.969222 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 1d9353637a46 13 hours ago 992MB 2025-11-08 14:17:59.969226 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 7fcfdc0cece5 13 hours ago 1.05GB 2025-11-08 14:17:59.969230 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 38ece7c6fc9a 13 hours ago 1.03GB 2025-11-08 14:17:59.969233 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 d002cac3f007 13 hours ago 1.03GB 2025-11-08 14:17:59.969237 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 0c77aa017cf5 13 hours ago 1.03GB 2025-11-08 
14:17:59.969244 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 45138f6539d1 13 hours ago 1.05GB 2025-11-08 14:17:59.969247 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 11dcd0d2e656 13 hours ago 1.04GB 2025-11-08 14:17:59.969251 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 99b68c7e04e3 13 hours ago 1.04GB 2025-11-08 14:17:59.969255 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 556b766e9cb0 13 hours ago 1.09GB 2025-11-08 14:18:00.361998 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2025-11-08 14:18:00.368874 | orchestrator | + set -e 2025-11-08 14:18:00.368931 | orchestrator | + source /opt/manager-vars.sh 2025-11-08 14:18:00.370227 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-11-08 14:18:00.370245 | orchestrator | ++ NUMBER_OF_NODES=6 2025-11-08 14:18:00.370251 | orchestrator | ++ export CEPH_VERSION=reef 2025-11-08 14:18:00.370256 | orchestrator | ++ CEPH_VERSION=reef 2025-11-08 14:18:00.370261 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-11-08 14:18:00.370267 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-11-08 14:18:00.370272 | orchestrator | ++ export MANAGER_VERSION=latest 2025-11-08 14:18:00.370302 | orchestrator | ++ MANAGER_VERSION=latest 2025-11-08 14:18:00.370307 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-11-08 14:18:00.370312 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-11-08 14:18:00.370317 | orchestrator | ++ export ARA=false 2025-11-08 14:18:00.370325 | orchestrator | ++ ARA=false 2025-11-08 14:18:00.370331 | orchestrator | ++ export DEPLOY_MODE=manager 2025-11-08 14:18:00.370337 | orchestrator | ++ DEPLOY_MODE=manager 2025-11-08 14:18:00.370342 | orchestrator | ++ export TEMPEST=false 2025-11-08 14:18:00.370347 | orchestrator | ++ TEMPEST=false 2025-11-08 14:18:00.370352 | orchestrator | ++ export IS_ZUUL=true 2025-11-08 14:18:00.370356 | orchestrator | ++ IS_ZUUL=true 2025-11-08 14:18:00.370361 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.186 2025-11-08 14:18:00.370367 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.186 2025-11-08 14:18:00.370371 | orchestrator | ++ export EXTERNAL_API=false 2025-11-08 14:18:00.370376 | orchestrator | ++ EXTERNAL_API=false 2025-11-08 14:18:00.370381 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-11-08 14:18:00.370386 | orchestrator | ++ IMAGE_USER=ubuntu 2025-11-08 14:18:00.370391 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-11-08 14:18:00.370395 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-11-08 14:18:00.370400 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-11-08 14:18:00.370405 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-11-08 14:18:00.370410 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-11-08 14:18:00.370415 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2025-11-08 14:18:00.377856 | orchestrator | + set -e 2025-11-08 14:18:00.377934 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-11-08 14:18:00.377975 | orchestrator | ++ export INTERACTIVE=false 2025-11-08 14:18:00.377993 | orchestrator | ++ INTERACTIVE=false 2025-11-08 14:18:00.378007 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-11-08 14:18:00.378069 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-11-08 14:18:00.378082 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-11-08 14:18:00.378718 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' 
/opt/configuration/environments/manager/configuration.yml 2025-11-08 14:18:00.382055 | orchestrator | 2025-11-08 14:18:00.382085 | orchestrator | # Ceph status 2025-11-08 14:18:00.382093 | orchestrator | 2025-11-08 14:18:00.382101 | orchestrator | ++ export MANAGER_VERSION=latest 2025-11-08 14:18:00.382109 | orchestrator | ++ MANAGER_VERSION=latest 2025-11-08 14:18:00.382118 | orchestrator | + echo 2025-11-08 14:18:00.382125 | orchestrator | + echo '# Ceph status' 2025-11-08 14:18:00.382133 | orchestrator | + echo 2025-11-08 14:18:00.382141 | orchestrator | + ceph -s 2025-11-08 14:18:00.968568 | orchestrator | cluster: 2025-11-08 14:18:00.968703 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2025-11-08 14:18:00.968731 | orchestrator | health: HEALTH_OK 2025-11-08 14:18:00.968752 | orchestrator | 2025-11-08 14:18:00.968771 | orchestrator | services: 2025-11-08 14:18:00.968789 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 30m) 2025-11-08 14:18:00.968811 | orchestrator | mgr: testbed-node-1(active, since 18m), standbys: testbed-node-2, testbed-node-0 2025-11-08 14:18:00.968831 | orchestrator | mds: 1/1 daemons up, 2 standby 2025-11-08 14:18:00.968849 | orchestrator | osd: 6 osds: 6 up (since 26m), 6 in (since 27m) 2025-11-08 14:18:00.968868 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2025-11-08 14:18:00.968881 | orchestrator | 2025-11-08 14:18:00.968892 | orchestrator | data: 2025-11-08 14:18:00.968903 | orchestrator | volumes: 1/1 healthy 2025-11-08 14:18:00.968914 | orchestrator | pools: 14 pools, 401 pgs 2025-11-08 14:18:00.968925 | orchestrator | objects: 524 objects, 2.2 GiB 2025-11-08 14:18:00.968936 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2025-11-08 14:18:00.968983 | orchestrator | pgs: 401 active+clean 2025-11-08 14:18:00.968996 | orchestrator | 2025-11-08 14:18:01.021436 | orchestrator | 2025-11-08 14:18:01.021555 | orchestrator | # Ceph versions 2025-11-08 14:18:01.021566 | orchestrator | 2025-11-08 14:18:01.021575 | orchestrator | + echo 2025-11-08 14:18:01.021582 | orchestrator | + echo '# Ceph versions' 2025-11-08 14:18:01.021590 | orchestrator | + echo 2025-11-08 14:18:01.021597 | orchestrator | + ceph versions 2025-11-08 14:18:01.712095 | orchestrator | { 2025-11-08 14:18:01.712215 | orchestrator | "mon": { 2025-11-08 14:18:01.712227 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-11-08 14:18:01.712238 | orchestrator | }, 2025-11-08 14:18:01.712247 | orchestrator | "mgr": { 2025-11-08 14:18:01.712256 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-11-08 14:18:01.712265 | orchestrator | }, 2025-11-08 14:18:01.712305 | orchestrator | "osd": { 2025-11-08 14:18:01.712314 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2025-11-08 14:18:01.712323 | orchestrator | }, 2025-11-08 14:18:01.712331 | orchestrator | "mds": { 2025-11-08 14:18:01.712340 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-11-08 14:18:01.712348 | orchestrator | }, 2025-11-08 14:18:01.712357 | orchestrator | "rgw": { 2025-11-08 14:18:01.712366 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-11-08 14:18:01.712374 | orchestrator | }, 2025-11-08 14:18:01.712383 | orchestrator | "overall": { 2025-11-08 14:18:01.712392 | orchestrator | "ceph version 18.2.7 
(6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2025-11-08 14:18:01.712402 | orchestrator | } 2025-11-08 14:18:01.712410 | orchestrator | } 2025-11-08 14:18:01.766436 | orchestrator | 2025-11-08 14:18:01.766550 | orchestrator | # Ceph OSD tree 2025-11-08 14:18:01.766566 | orchestrator | 2025-11-08 14:18:01.766578 | orchestrator | + echo 2025-11-08 14:18:01.766589 | orchestrator | + echo '# Ceph OSD tree' 2025-11-08 14:18:01.766602 | orchestrator | + echo 2025-11-08 14:18:01.766613 | orchestrator | + ceph osd df tree 2025-11-08 14:18:02.291638 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2025-11-08 14:18:02.291787 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 434 MiB 113 GiB 5.92 1.00 - root default 2025-11-08 14:18:02.291801 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 147 MiB 38 GiB 5.93 1.00 - host testbed-node-3 2025-11-08 14:18:02.291813 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 74 MiB 19 GiB 5.38 0.91 186 up osd.0 2025-11-08 14:18:02.291824 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 74 MiB 19 GiB 6.47 1.09 202 up osd.4 2025-11-08 14:18:02.291836 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2025-11-08 14:18:02.291847 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.3 GiB 1 KiB 74 MiB 19 GiB 6.67 1.13 192 up osd.2 2025-11-08 14:18:02.291858 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.0 GiB 987 MiB 1 KiB 70 MiB 19 GiB 5.17 0.87 200 up osd.3 2025-11-08 14:18:02.291868 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2025-11-08 14:18:02.291879 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 70 MiB 19 GiB 6.77 1.14 209 up osd.1 2025-11-08 14:18:02.291890 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.0 GiB 963 MiB 1 KiB 74 MiB 19 GiB 5.07 0.86 181 up osd.5 2025-11-08 14:18:02.291901 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 434 MiB 113 GiB 5.92 2025-11-08 14:18:02.291912 | orchestrator | MIN/MAX VAR: 0.86/1.14 STDDEV: 0.73 2025-11-08 14:18:02.347041 | orchestrator | 2025-11-08 14:18:02.347162 | orchestrator | # Ceph monitor status 2025-11-08 14:18:02.347179 | orchestrator | 2025-11-08 14:18:02.347191 | orchestrator | + echo 2025-11-08 14:18:02.347203 | orchestrator | + echo '# Ceph monitor status' 2025-11-08 14:18:02.347215 | orchestrator | + echo 2025-11-08 14:18:02.347226 | orchestrator | + ceph mon stat 2025-11-08 14:18:03.030366 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 4, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2025-11-08 14:18:03.084078 | orchestrator | 2025-11-08 14:18:03.084193 | orchestrator | # Ceph quorum status 2025-11-08 14:18:03.084214 | orchestrator | 2025-11-08 14:18:03.084232 | orchestrator | + echo 2025-11-08 14:18:03.084249 | orchestrator | + echo '# Ceph quorum status' 2025-11-08 14:18:03.084266 | orchestrator | + echo 2025-11-08 14:18:03.084689 | orchestrator | + ceph quorum_status 2025-11-08 14:18:03.084742 | orchestrator | + jq 2025-11-08 14:18:03.859616 | orchestrator | { 2025-11-08 14:18:03.859788 | orchestrator | "election_epoch": 4, 2025-11-08 14:18:03.859806 | 
orchestrator | "quorum": [ 2025-11-08 14:18:03.859820 | orchestrator | 0, 2025-11-08 14:18:03.859831 | orchestrator | 1, 2025-11-08 14:18:03.859842 | orchestrator | 2 2025-11-08 14:18:03.859853 | orchestrator | ], 2025-11-08 14:18:03.859864 | orchestrator | "quorum_names": [ 2025-11-08 14:18:03.859876 | orchestrator | "testbed-node-0", 2025-11-08 14:18:03.859887 | orchestrator | "testbed-node-1", 2025-11-08 14:18:03.859897 | orchestrator | "testbed-node-2" 2025-11-08 14:18:03.859908 | orchestrator | ], 2025-11-08 14:18:03.859920 | orchestrator | "quorum_leader_name": "testbed-node-0", 2025-11-08 14:18:03.859932 | orchestrator | "quorum_age": 1826, 2025-11-08 14:18:03.859943 | orchestrator | "features": { 2025-11-08 14:18:03.860008 | orchestrator | "quorum_con": "4540138322906710015", 2025-11-08 14:18:03.860026 | orchestrator | "quorum_mon": [ 2025-11-08 14:18:03.860038 | orchestrator | "kraken", 2025-11-08 14:18:03.860048 | orchestrator | "luminous", 2025-11-08 14:18:03.860059 | orchestrator | "mimic", 2025-11-08 14:18:03.860070 | orchestrator | "osdmap-prune", 2025-11-08 14:18:03.860082 | orchestrator | "nautilus", 2025-11-08 14:18:03.860093 | orchestrator | "octopus", 2025-11-08 14:18:03.860104 | orchestrator | "pacific", 2025-11-08 14:18:03.860114 | orchestrator | "elector-pinging", 2025-11-08 14:18:03.860125 | orchestrator | "quincy", 2025-11-08 14:18:03.860139 | orchestrator | "reef" 2025-11-08 14:18:03.860151 | orchestrator | ] 2025-11-08 14:18:03.860163 | orchestrator | }, 2025-11-08 14:18:03.860175 | orchestrator | "monmap": { 2025-11-08 14:18:03.860187 | orchestrator | "epoch": 1, 2025-11-08 14:18:03.860200 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2025-11-08 14:18:03.860213 | orchestrator | "modified": "2025-11-08T13:47:24.990806Z", 2025-11-08 14:18:03.860225 | orchestrator | "created": "2025-11-08T13:47:24.990806Z", 2025-11-08 14:18:03.860237 | orchestrator | "min_mon_release": 18, 2025-11-08 14:18:03.860250 | orchestrator | "min_mon_release_name": "reef", 2025-11-08 14:18:03.860262 | orchestrator | "election_strategy": 1, 2025-11-08 14:18:03.860275 | orchestrator | "disallowed_leaders: ": "", 2025-11-08 14:18:03.860288 | orchestrator | "stretch_mode": false, 2025-11-08 14:18:03.860300 | orchestrator | "tiebreaker_mon": "", 2025-11-08 14:18:03.860311 | orchestrator | "removed_ranks: ": "", 2025-11-08 14:18:03.860324 | orchestrator | "features": { 2025-11-08 14:18:03.860336 | orchestrator | "persistent": [ 2025-11-08 14:18:03.860348 | orchestrator | "kraken", 2025-11-08 14:18:03.860360 | orchestrator | "luminous", 2025-11-08 14:18:03.860372 | orchestrator | "mimic", 2025-11-08 14:18:03.860384 | orchestrator | "osdmap-prune", 2025-11-08 14:18:03.860396 | orchestrator | "nautilus", 2025-11-08 14:18:03.860408 | orchestrator | "octopus", 2025-11-08 14:18:03.860420 | orchestrator | "pacific", 2025-11-08 14:18:03.860432 | orchestrator | "elector-pinging", 2025-11-08 14:18:03.860444 | orchestrator | "quincy", 2025-11-08 14:18:03.860455 | orchestrator | "reef" 2025-11-08 14:18:03.860469 | orchestrator | ], 2025-11-08 14:18:03.860481 | orchestrator | "optional": [] 2025-11-08 14:18:03.860493 | orchestrator | }, 2025-11-08 14:18:03.860506 | orchestrator | "mons": [ 2025-11-08 14:18:03.860518 | orchestrator | { 2025-11-08 14:18:03.860529 | orchestrator | "rank": 0, 2025-11-08 14:18:03.860540 | orchestrator | "name": "testbed-node-0", 2025-11-08 14:18:03.860551 | orchestrator | "public_addrs": { 2025-11-08 14:18:03.860562 | orchestrator | "addrvec": [ 2025-11-08 
14:18:03.860573 | orchestrator | { 2025-11-08 14:18:03.860584 | orchestrator | "type": "v2", 2025-11-08 14:18:03.860595 | orchestrator | "addr": "192.168.16.10:3300", 2025-11-08 14:18:03.860605 | orchestrator | "nonce": 0 2025-11-08 14:18:03.860616 | orchestrator | }, 2025-11-08 14:18:03.860627 | orchestrator | { 2025-11-08 14:18:03.860638 | orchestrator | "type": "v1", 2025-11-08 14:18:03.860648 | orchestrator | "addr": "192.168.16.10:6789", 2025-11-08 14:18:03.860659 | orchestrator | "nonce": 0 2025-11-08 14:18:03.860670 | orchestrator | } 2025-11-08 14:18:03.860680 | orchestrator | ] 2025-11-08 14:18:03.860691 | orchestrator | }, 2025-11-08 14:18:03.860725 | orchestrator | "addr": "192.168.16.10:6789/0", 2025-11-08 14:18:03.860736 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2025-11-08 14:18:03.860747 | orchestrator | "priority": 0, 2025-11-08 14:18:03.860757 | orchestrator | "weight": 0, 2025-11-08 14:18:03.860768 | orchestrator | "crush_location": "{}" 2025-11-08 14:18:03.860779 | orchestrator | }, 2025-11-08 14:18:03.860790 | orchestrator | { 2025-11-08 14:18:03.860827 | orchestrator | "rank": 1, 2025-11-08 14:18:03.860838 | orchestrator | "name": "testbed-node-1", 2025-11-08 14:18:03.860849 | orchestrator | "public_addrs": { 2025-11-08 14:18:03.860860 | orchestrator | "addrvec": [ 2025-11-08 14:18:03.860870 | orchestrator | { 2025-11-08 14:18:03.860881 | orchestrator | "type": "v2", 2025-11-08 14:18:03.860892 | orchestrator | "addr": "192.168.16.11:3300", 2025-11-08 14:18:03.860902 | orchestrator | "nonce": 0 2025-11-08 14:18:03.860918 | orchestrator | }, 2025-11-08 14:18:03.860936 | orchestrator | { 2025-11-08 14:18:03.860976 | orchestrator | "type": "v1", 2025-11-08 14:18:03.860993 | orchestrator | "addr": "192.168.16.11:6789", 2025-11-08 14:18:03.861011 | orchestrator | "nonce": 0 2025-11-08 14:18:03.861028 | orchestrator | } 2025-11-08 14:18:03.861045 | orchestrator | ] 2025-11-08 14:18:03.861062 | orchestrator | }, 2025-11-08 14:18:03.861077 | orchestrator | "addr": "192.168.16.11:6789/0", 2025-11-08 14:18:03.861094 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2025-11-08 14:18:03.861110 | orchestrator | "priority": 0, 2025-11-08 14:18:03.861127 | orchestrator | "weight": 0, 2025-11-08 14:18:03.861144 | orchestrator | "crush_location": "{}" 2025-11-08 14:18:03.861161 | orchestrator | }, 2025-11-08 14:18:03.861179 | orchestrator | { 2025-11-08 14:18:03.861197 | orchestrator | "rank": 2, 2025-11-08 14:18:03.861214 | orchestrator | "name": "testbed-node-2", 2025-11-08 14:18:03.861233 | orchestrator | "public_addrs": { 2025-11-08 14:18:03.861250 | orchestrator | "addrvec": [ 2025-11-08 14:18:03.861269 | orchestrator | { 2025-11-08 14:18:03.861288 | orchestrator | "type": "v2", 2025-11-08 14:18:03.861305 | orchestrator | "addr": "192.168.16.12:3300", 2025-11-08 14:18:03.861325 | orchestrator | "nonce": 0 2025-11-08 14:18:03.861336 | orchestrator | }, 2025-11-08 14:18:03.861347 | orchestrator | { 2025-11-08 14:18:03.861358 | orchestrator | "type": "v1", 2025-11-08 14:18:03.861368 | orchestrator | "addr": "192.168.16.12:6789", 2025-11-08 14:18:03.861379 | orchestrator | "nonce": 0 2025-11-08 14:18:03.861389 | orchestrator | } 2025-11-08 14:18:03.861400 | orchestrator | ] 2025-11-08 14:18:03.861410 | orchestrator | }, 2025-11-08 14:18:03.861421 | orchestrator | "addr": "192.168.16.12:6789/0", 2025-11-08 14:18:03.861432 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2025-11-08 14:18:03.861442 | orchestrator | "priority": 0, 2025-11-08 14:18:03.861453 | 
orchestrator | "weight": 0, 2025-11-08 14:18:03.861463 | orchestrator | "crush_location": "{}" 2025-11-08 14:18:03.861474 | orchestrator | } 2025-11-08 14:18:03.861494 | orchestrator | ] 2025-11-08 14:18:03.861505 | orchestrator | } 2025-11-08 14:18:03.861516 | orchestrator | } 2025-11-08 14:18:03.861674 | orchestrator | 2025-11-08 14:18:03.861690 | orchestrator | # Ceph free space status 2025-11-08 14:18:03.861702 | orchestrator | 2025-11-08 14:18:03.861713 | orchestrator | + echo 2025-11-08 14:18:03.861724 | orchestrator | + echo '# Ceph free space status' 2025-11-08 14:18:03.861735 | orchestrator | + echo 2025-11-08 14:18:03.861746 | orchestrator | + ceph df 2025-11-08 14:18:04.573385 | orchestrator | --- RAW STORAGE --- 2025-11-08 14:18:04.573500 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2025-11-08 14:18:04.573518 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-11-08 14:18:04.573530 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-11-08 14:18:04.573541 | orchestrator | 2025-11-08 14:18:04.573554 | orchestrator | --- POOLS --- 2025-11-08 14:18:04.573566 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2025-11-08 14:18:04.573579 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2025-11-08 14:18:04.573590 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2025-11-08 14:18:04.573601 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2025-11-08 14:18:04.573612 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2025-11-08 14:18:04.573623 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2025-11-08 14:18:04.573634 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2025-11-08 14:18:04.573660 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB 2025-11-08 14:18:04.573671 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2025-11-08 14:18:04.573709 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB 2025-11-08 14:18:04.573721 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2025-11-08 14:18:04.573731 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2025-11-08 14:18:04.573742 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.92 35 GiB 2025-11-08 14:18:04.573753 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2025-11-08 14:18:04.573763 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2025-11-08 14:18:04.652010 | orchestrator | ++ semver latest 5.0.0 2025-11-08 14:18:04.721997 | orchestrator | + [[ -1 -eq -1 ]] 2025-11-08 14:18:04.722189 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-11-08 14:18:04.722208 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2025-11-08 14:18:04.722220 | orchestrator | + osism apply facts 2025-11-08 14:18:07.096080 | orchestrator | 2025-11-08 14:18:07 | INFO  | Task 929e0f0c-1ce6-41d9-b463-32a1f30e4dcb (facts) was prepared for execution. 2025-11-08 14:18:07.096217 | orchestrator | 2025-11-08 14:18:07 | INFO  | It takes a moment until task 929e0f0c-1ce6-41d9-b463-32a1f30e4dcb (facts) has been started and output is visible here. 
2025-11-08 14:18:23.661179 | orchestrator | 2025-11-08 14:18:23.661308 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-11-08 14:18:23.661326 | orchestrator | 2025-11-08 14:18:23.661338 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-11-08 14:18:23.661351 | orchestrator | Saturday 08 November 2025 14:18:12 +0000 (0:00:00.328) 0:00:00.328 ***** 2025-11-08 14:18:23.661362 | orchestrator | ok: [testbed-manager] 2025-11-08 14:18:23.661376 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:18:23.661387 | orchestrator | ok: [testbed-node-1] 2025-11-08 14:18:23.661397 | orchestrator | ok: [testbed-node-2] 2025-11-08 14:18:23.661408 | orchestrator | ok: [testbed-node-3] 2025-11-08 14:18:23.661419 | orchestrator | ok: [testbed-node-4] 2025-11-08 14:18:23.661430 | orchestrator | ok: [testbed-node-5] 2025-11-08 14:18:23.661440 | orchestrator | 2025-11-08 14:18:23.661452 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-11-08 14:18:23.661463 | orchestrator | Saturday 08 November 2025 14:18:14 +0000 (0:00:01.675) 0:00:02.004 ***** 2025-11-08 14:18:23.661474 | orchestrator | skipping: [testbed-manager] 2025-11-08 14:18:23.661486 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:18:23.661497 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:18:23.661508 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:18:23.661519 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:18:23.661529 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:18:23.661540 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:18:23.661551 | orchestrator | 2025-11-08 14:18:23.661562 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-11-08 14:18:23.661573 | orchestrator | 2025-11-08 14:18:23.661584 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-11-08 14:18:23.661596 | orchestrator | Saturday 08 November 2025 14:18:15 +0000 (0:00:01.637) 0:00:03.642 ***** 2025-11-08 14:18:23.661607 | orchestrator | ok: [testbed-node-1] 2025-11-08 14:18:23.661618 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:18:23.661628 | orchestrator | ok: [testbed-node-2] 2025-11-08 14:18:23.661639 | orchestrator | ok: [testbed-node-3] 2025-11-08 14:18:23.661650 | orchestrator | ok: [testbed-node-4] 2025-11-08 14:18:23.661661 | orchestrator | ok: [testbed-node-5] 2025-11-08 14:18:23.661672 | orchestrator | ok: [testbed-manager] 2025-11-08 14:18:23.661682 | orchestrator | 2025-11-08 14:18:23.661694 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-11-08 14:18:23.661705 | orchestrator | 2025-11-08 14:18:23.661718 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-11-08 14:18:23.661730 | orchestrator | Saturday 08 November 2025 14:18:22 +0000 (0:00:06.135) 0:00:09.778 ***** 2025-11-08 14:18:23.661742 | orchestrator | skipping: [testbed-manager] 2025-11-08 14:18:23.661778 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:18:23.661791 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:18:23.661803 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:18:23.661814 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:18:23.661826 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:18:23.661838 | orchestrator | skipping: 
[testbed-node-5] 2025-11-08 14:18:23.661850 | orchestrator | 2025-11-08 14:18:23.661863 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 14:18:23.661877 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 14:18:23.661890 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 14:18:23.661902 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 14:18:23.661915 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 14:18:23.661927 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 14:18:23.661937 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 14:18:23.661987 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 14:18:23.661999 | orchestrator | 2025-11-08 14:18:23.662010 | orchestrator | 2025-11-08 14:18:23.662084 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 14:18:23.662096 | orchestrator | Saturday 08 November 2025 14:18:22 +0000 (0:00:00.801) 0:00:10.579 ***** 2025-11-08 14:18:23.662107 | orchestrator | =============================================================================== 2025-11-08 14:18:23.662117 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.14s 2025-11-08 14:18:23.662128 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.68s 2025-11-08 14:18:23.662138 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.64s 2025-11-08 14:18:23.662149 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.80s 2025-11-08 14:18:24.339632 | orchestrator | + osism validate ceph-mons 2025-11-08 14:18:59.947457 | orchestrator | 2025-11-08 14:18:59.947560 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2025-11-08 14:18:59.947575 | orchestrator | 2025-11-08 14:18:59.947584 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-11-08 14:18:59.947592 | orchestrator | Saturday 08 November 2025 14:18:42 +0000 (0:00:00.486) 0:00:00.486 ***** 2025-11-08 14:18:59.947601 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-11-08 14:18:59.947610 | orchestrator | 2025-11-08 14:18:59.947618 | orchestrator | TASK [Create report output directory] ****************************************** 2025-11-08 14:18:59.947626 | orchestrator | Saturday 08 November 2025 14:18:43 +0000 (0:00:01.098) 0:00:01.585 ***** 2025-11-08 14:18:59.947634 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-11-08 14:18:59.947641 | orchestrator | 2025-11-08 14:18:59.947648 | orchestrator | TASK [Define report vars] ****************************************************** 2025-11-08 14:18:59.947655 | orchestrator | Saturday 08 November 2025 14:18:44 +0000 (0:00:01.218) 0:00:02.804 ***** 2025-11-08 14:18:59.947663 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:18:59.947671 | orchestrator | 2025-11-08 14:18:59.947679 | orchestrator | TASK [Prepare test data for container existance 
test] ************************** 2025-11-08 14:18:59.947685 | orchestrator | Saturday 08 November 2025 14:18:44 +0000 (0:00:00.146) 0:00:02.950 ***** 2025-11-08 14:18:59.947713 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:18:59.947721 | orchestrator | ok: [testbed-node-1] 2025-11-08 14:18:59.947728 | orchestrator | ok: [testbed-node-2] 2025-11-08 14:18:59.947734 | orchestrator | 2025-11-08 14:18:59.947741 | orchestrator | TASK [Get container info] ****************************************************** 2025-11-08 14:18:59.947748 | orchestrator | Saturday 08 November 2025 14:18:45 +0000 (0:00:00.357) 0:00:03.308 ***** 2025-11-08 14:18:59.947755 | orchestrator | ok: [testbed-node-1] 2025-11-08 14:18:59.947762 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:18:59.947769 | orchestrator | ok: [testbed-node-2] 2025-11-08 14:18:59.947776 | orchestrator | 2025-11-08 14:18:59.947782 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-11-08 14:18:59.947789 | orchestrator | Saturday 08 November 2025 14:18:46 +0000 (0:00:01.156) 0:00:04.465 ***** 2025-11-08 14:18:59.947797 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:18:59.947804 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:18:59.947811 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:18:59.947818 | orchestrator | 2025-11-08 14:18:59.947825 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-11-08 14:18:59.947832 | orchestrator | Saturday 08 November 2025 14:18:46 +0000 (0:00:00.334) 0:00:04.799 ***** 2025-11-08 14:18:59.947838 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:18:59.947845 | orchestrator | ok: [testbed-node-1] 2025-11-08 14:18:59.947852 | orchestrator | ok: [testbed-node-2] 2025-11-08 14:18:59.947858 | orchestrator | 2025-11-08 14:18:59.947864 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-11-08 14:18:59.947885 | orchestrator | Saturday 08 November 2025 14:18:47 +0000 (0:00:00.624) 0:00:05.423 ***** 2025-11-08 14:18:59.947891 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:18:59.947897 | orchestrator | ok: [testbed-node-1] 2025-11-08 14:18:59.947903 | orchestrator | ok: [testbed-node-2] 2025-11-08 14:18:59.947909 | orchestrator | 2025-11-08 14:18:59.947915 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2025-11-08 14:18:59.947921 | orchestrator | Saturday 08 November 2025 14:18:47 +0000 (0:00:00.349) 0:00:05.772 ***** 2025-11-08 14:18:59.947928 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:18:59.947935 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:18:59.947941 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:18:59.947948 | orchestrator | 2025-11-08 14:18:59.947975 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2025-11-08 14:18:59.947981 | orchestrator | Saturday 08 November 2025 14:18:48 +0000 (0:00:00.362) 0:00:06.135 ***** 2025-11-08 14:18:59.947987 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:18:59.947993 | orchestrator | ok: [testbed-node-1] 2025-11-08 14:18:59.948000 | orchestrator | ok: [testbed-node-2] 2025-11-08 14:18:59.948006 | orchestrator | 2025-11-08 14:18:59.948017 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-11-08 14:18:59.948024 | orchestrator | Saturday 08 November 2025 14:18:48 +0000 (0:00:00.537) 0:00:06.673 ***** 
2025-11-08 14:18:59.948031 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:18:59.948038 | orchestrator | 2025-11-08 14:18:59.948045 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-11-08 14:18:59.948052 | orchestrator | Saturday 08 November 2025 14:18:48 +0000 (0:00:00.277) 0:00:06.950 ***** 2025-11-08 14:18:59.948058 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:18:59.948065 | orchestrator | 2025-11-08 14:18:59.948072 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-11-08 14:18:59.948079 | orchestrator | Saturday 08 November 2025 14:18:49 +0000 (0:00:00.284) 0:00:07.235 ***** 2025-11-08 14:18:59.948086 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:18:59.948093 | orchestrator | 2025-11-08 14:18:59.948100 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-11-08 14:18:59.948106 | orchestrator | Saturday 08 November 2025 14:18:49 +0000 (0:00:00.263) 0:00:07.498 ***** 2025-11-08 14:18:59.948113 | orchestrator | 2025-11-08 14:18:59.948128 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-11-08 14:18:59.948135 | orchestrator | Saturday 08 November 2025 14:18:49 +0000 (0:00:00.075) 0:00:07.574 ***** 2025-11-08 14:18:59.948142 | orchestrator | 2025-11-08 14:18:59.948148 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-11-08 14:18:59.948156 | orchestrator | Saturday 08 November 2025 14:18:49 +0000 (0:00:00.075) 0:00:07.649 ***** 2025-11-08 14:18:59.948163 | orchestrator | 2025-11-08 14:18:59.948170 | orchestrator | TASK [Print report file information] ******************************************* 2025-11-08 14:18:59.948177 | orchestrator | Saturday 08 November 2025 14:18:49 +0000 (0:00:00.077) 0:00:07.727 ***** 2025-11-08 14:18:59.948184 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:18:59.948191 | orchestrator | 2025-11-08 14:18:59.948198 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-11-08 14:18:59.948205 | orchestrator | Saturday 08 November 2025 14:18:49 +0000 (0:00:00.282) 0:00:08.009 ***** 2025-11-08 14:18:59.948212 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:18:59.948218 | orchestrator | 2025-11-08 14:18:59.948242 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2025-11-08 14:18:59.948249 | orchestrator | Saturday 08 November 2025 14:18:50 +0000 (0:00:00.290) 0:00:08.300 ***** 2025-11-08 14:18:59.948256 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:18:59.948263 | orchestrator | 2025-11-08 14:18:59.948269 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2025-11-08 14:18:59.948276 | orchestrator | Saturday 08 November 2025 14:18:50 +0000 (0:00:00.147) 0:00:08.448 ***** 2025-11-08 14:18:59.948283 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:18:59.948289 | orchestrator | 2025-11-08 14:18:59.948296 | orchestrator | TASK [Set quorum test data] **************************************************** 2025-11-08 14:18:59.948303 | orchestrator | Saturday 08 November 2025 14:18:52 +0000 (0:00:01.733) 0:00:10.181 ***** 2025-11-08 14:18:59.948310 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:18:59.948317 | orchestrator | 2025-11-08 14:18:59.948324 | orchestrator | TASK [Fail quorum test if not all monitors are in 
quorum] ********************** 2025-11-08 14:18:59.948330 | orchestrator | Saturday 08 November 2025 14:18:52 +0000 (0:00:00.588) 0:00:10.769 ***** 2025-11-08 14:18:59.948337 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:18:59.948344 | orchestrator | 2025-11-08 14:18:59.948351 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2025-11-08 14:18:59.948357 | orchestrator | Saturday 08 November 2025 14:18:52 +0000 (0:00:00.147) 0:00:10.917 ***** 2025-11-08 14:18:59.948364 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:18:59.948371 | orchestrator | 2025-11-08 14:18:59.948377 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2025-11-08 14:18:59.948384 | orchestrator | Saturday 08 November 2025 14:18:53 +0000 (0:00:00.349) 0:00:11.267 ***** 2025-11-08 14:18:59.948391 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:18:59.948398 | orchestrator | 2025-11-08 14:18:59.948405 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2025-11-08 14:18:59.948412 | orchestrator | Saturday 08 November 2025 14:18:53 +0000 (0:00:00.338) 0:00:11.606 ***** 2025-11-08 14:18:59.948418 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:18:59.948425 | orchestrator | 2025-11-08 14:18:59.948432 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2025-11-08 14:18:59.948439 | orchestrator | Saturday 08 November 2025 14:18:53 +0000 (0:00:00.114) 0:00:11.720 ***** 2025-11-08 14:18:59.948446 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:18:59.948453 | orchestrator | 2025-11-08 14:18:59.948460 | orchestrator | TASK [Prepare status test vars] ************************************************ 2025-11-08 14:18:59.948466 | orchestrator | Saturday 08 November 2025 14:18:53 +0000 (0:00:00.146) 0:00:11.866 ***** 2025-11-08 14:18:59.948473 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:18:59.948479 | orchestrator | 2025-11-08 14:18:59.948485 | orchestrator | TASK [Gather status data] ****************************************************** 2025-11-08 14:18:59.948491 | orchestrator | Saturday 08 November 2025 14:18:53 +0000 (0:00:00.132) 0:00:11.999 ***** 2025-11-08 14:18:59.948502 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:18:59.948508 | orchestrator | 2025-11-08 14:18:59.948514 | orchestrator | TASK [Set health test data] **************************************************** 2025-11-08 14:18:59.948520 | orchestrator | Saturday 08 November 2025 14:18:55 +0000 (0:00:01.342) 0:00:13.342 ***** 2025-11-08 14:18:59.948526 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:18:59.948532 | orchestrator | 2025-11-08 14:18:59.948538 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2025-11-08 14:18:59.948543 | orchestrator | Saturday 08 November 2025 14:18:55 +0000 (0:00:00.370) 0:00:13.712 ***** 2025-11-08 14:18:59.948549 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:18:59.948556 | orchestrator | 2025-11-08 14:18:59.948563 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2025-11-08 14:18:59.948569 | orchestrator | Saturday 08 November 2025 14:18:55 +0000 (0:00:00.157) 0:00:13.869 ***** 2025-11-08 14:18:59.948576 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:18:59.948582 | orchestrator | 2025-11-08 14:18:59.948588 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] 
**************** 2025-11-08 14:18:59.948599 | orchestrator | Saturday 08 November 2025 14:18:55 +0000 (0:00:00.187) 0:00:14.056 ***** 2025-11-08 14:18:59.948605 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:18:59.948611 | orchestrator | 2025-11-08 14:18:59.948617 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2025-11-08 14:18:59.948623 | orchestrator | Saturday 08 November 2025 14:18:56 +0000 (0:00:00.146) 0:00:14.203 ***** 2025-11-08 14:18:59.948629 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:18:59.948635 | orchestrator | 2025-11-08 14:18:59.948640 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-11-08 14:18:59.948646 | orchestrator | Saturday 08 November 2025 14:18:56 +0000 (0:00:00.371) 0:00:14.574 ***** 2025-11-08 14:18:59.948652 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-11-08 14:18:59.948658 | orchestrator | 2025-11-08 14:18:59.948664 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-11-08 14:18:59.948669 | orchestrator | Saturday 08 November 2025 14:18:56 +0000 (0:00:00.301) 0:00:14.876 ***** 2025-11-08 14:18:59.948675 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:18:59.948680 | orchestrator | 2025-11-08 14:18:59.948685 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-11-08 14:18:59.948691 | orchestrator | Saturday 08 November 2025 14:18:57 +0000 (0:00:00.326) 0:00:15.203 ***** 2025-11-08 14:18:59.948696 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-11-08 14:18:59.948702 | orchestrator | 2025-11-08 14:18:59.948707 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-11-08 14:18:59.948713 | orchestrator | Saturday 08 November 2025 14:18:59 +0000 (0:00:01.963) 0:00:17.166 ***** 2025-11-08 14:18:59.948718 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-11-08 14:18:59.948725 | orchestrator | 2025-11-08 14:18:59.948731 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-11-08 14:18:59.948736 | orchestrator | Saturday 08 November 2025 14:18:59 +0000 (0:00:00.321) 0:00:17.488 ***** 2025-11-08 14:18:59.948742 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-11-08 14:18:59.948752 | orchestrator | 2025-11-08 14:18:59.948767 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-11-08 14:19:02.982849 | orchestrator | Saturday 08 November 2025 14:18:59 +0000 (0:00:00.282) 0:00:17.770 ***** 2025-11-08 14:19:02.982928 | orchestrator | 2025-11-08 14:19:02.982935 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-11-08 14:19:02.982941 | orchestrator | Saturday 08 November 2025 14:18:59 +0000 (0:00:00.077) 0:00:17.848 ***** 2025-11-08 14:19:02.982946 | orchestrator | 2025-11-08 14:19:02.982993 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-11-08 14:19:02.982999 | orchestrator | Saturday 08 November 2025 14:18:59 +0000 (0:00:00.075) 0:00:17.923 ***** 2025-11-08 14:19:02.983020 | orchestrator | 2025-11-08 14:19:02.983026 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-11-08 14:19:02.983031 | orchestrator | Saturday 08 November 
2025 14:18:59 +0000 (0:00:00.116) 0:00:18.040 ***** 2025-11-08 14:19:02.983036 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-11-08 14:19:02.983041 | orchestrator | 2025-11-08 14:19:02.983046 | orchestrator | TASK [Print report file information] ******************************************* 2025-11-08 14:19:02.983051 | orchestrator | Saturday 08 November 2025 14:19:01 +0000 (0:00:01.645) 0:00:19.685 ***** 2025-11-08 14:19:02.983056 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-11-08 14:19:02.983061 | orchestrator |  "msg": [ 2025-11-08 14:19:02.983068 | orchestrator |  "Validator run completed.", 2025-11-08 14:19:02.983074 | orchestrator |  "You can find the report file here:", 2025-11-08 14:19:02.983079 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-11-08T14:18:43+00:00-report.json", 2025-11-08 14:19:02.983085 | orchestrator |  "on the following host:", 2025-11-08 14:19:02.983090 | orchestrator |  "testbed-manager" 2025-11-08 14:19:02.983095 | orchestrator |  ] 2025-11-08 14:19:02.983101 | orchestrator | } 2025-11-08 14:19:02.983106 | orchestrator | 2025-11-08 14:19:02.983111 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 14:19:02.983117 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-11-08 14:19:02.983123 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 14:19:02.983129 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 14:19:02.983133 | orchestrator | 2025-11-08 14:19:02.983138 | orchestrator | 2025-11-08 14:19:02.983143 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 14:19:02.983148 | orchestrator | Saturday 08 November 2025 14:19:02 +0000 (0:00:00.976) 0:00:20.662 ***** 2025-11-08 14:19:02.983153 | orchestrator | =============================================================================== 2025-11-08 14:19:02.983161 | orchestrator | Aggregate test results step one ----------------------------------------- 1.96s 2025-11-08 14:19:02.983169 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.73s 2025-11-08 14:19:02.983177 | orchestrator | Write report file ------------------------------------------------------- 1.65s 2025-11-08 14:19:02.983185 | orchestrator | Gather status data ------------------------------------------------------ 1.34s 2025-11-08 14:19:02.983193 | orchestrator | Create report output directory ------------------------------------------ 1.22s 2025-11-08 14:19:02.983200 | orchestrator | Get container info ------------------------------------------------------ 1.16s 2025-11-08 14:19:02.983208 | orchestrator | Get timestamp for report file ------------------------------------------- 1.10s 2025-11-08 14:19:02.983216 | orchestrator | Print report file information ------------------------------------------- 0.98s 2025-11-08 14:19:02.983224 | orchestrator | Set test result to passed if container is existing ---------------------- 0.62s 2025-11-08 14:19:02.983230 | orchestrator | Set quorum test data ---------------------------------------------------- 0.59s 2025-11-08 14:19:02.983237 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.54s 2025-11-08 14:19:02.983244 | orchestrator | Pass 
cluster-health if status is OK (strict) ---------------------------- 0.37s 2025-11-08 14:19:02.983252 | orchestrator | Set health test data ---------------------------------------------------- 0.37s 2025-11-08 14:19:02.983259 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.36s 2025-11-08 14:19:02.983266 | orchestrator | Prepare test data for container existance test -------------------------- 0.36s 2025-11-08 14:19:02.983274 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.35s 2025-11-08 14:19:02.983289 | orchestrator | Prepare test data ------------------------------------------------------- 0.35s 2025-11-08 14:19:02.983297 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.34s 2025-11-08 14:19:02.983320 | orchestrator | Set test result to failed if container is missing ----------------------- 0.33s 2025-11-08 14:19:02.983330 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.33s 2025-11-08 14:19:03.390609 | orchestrator | + osism validate ceph-mgrs 2025-11-08 14:19:38.029838 | orchestrator | 2025-11-08 14:19:38.030086 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2025-11-08 14:19:38.030116 | orchestrator | 2025-11-08 14:19:38.030133 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-11-08 14:19:38.030143 | orchestrator | Saturday 08 November 2025 14:19:21 +0000 (0:00:00.460) 0:00:00.460 ***** 2025-11-08 14:19:38.030153 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-11-08 14:19:38.030163 | orchestrator | 2025-11-08 14:19:38.030171 | orchestrator | TASK [Create report output directory] ****************************************** 2025-11-08 14:19:38.030181 | orchestrator | Saturday 08 November 2025 14:19:22 +0000 (0:00:00.967) 0:00:01.427 ***** 2025-11-08 14:19:38.030190 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-11-08 14:19:38.030198 | orchestrator | 2025-11-08 14:19:38.030207 | orchestrator | TASK [Define report vars] ****************************************************** 2025-11-08 14:19:38.030216 | orchestrator | Saturday 08 November 2025 14:19:23 +0000 (0:00:01.564) 0:00:02.992 ***** 2025-11-08 14:19:38.030225 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:19:38.030236 | orchestrator | 2025-11-08 14:19:38.030245 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-11-08 14:19:38.030253 | orchestrator | Saturday 08 November 2025 14:19:23 +0000 (0:00:00.199) 0:00:03.192 ***** 2025-11-08 14:19:38.030262 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:19:38.030271 | orchestrator | ok: [testbed-node-1] 2025-11-08 14:19:38.030280 | orchestrator | ok: [testbed-node-2] 2025-11-08 14:19:38.030288 | orchestrator | 2025-11-08 14:19:38.030297 | orchestrator | TASK [Get container info] ****************************************************** 2025-11-08 14:19:38.030306 | orchestrator | Saturday 08 November 2025 14:19:24 +0000 (0:00:00.358) 0:00:03.551 ***** 2025-11-08 14:19:38.030316 | orchestrator | ok: [testbed-node-2] 2025-11-08 14:19:38.030324 | orchestrator | ok: [testbed-node-1] 2025-11-08 14:19:38.030333 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:19:38.030341 | orchestrator | 2025-11-08 14:19:38.030350 | orchestrator | TASK [Set test result to failed if container is missing] 
*********************** 2025-11-08 14:19:38.030359 | orchestrator | Saturday 08 November 2025 14:19:25 +0000 (0:00:01.063) 0:00:04.614 ***** 2025-11-08 14:19:38.030370 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:19:38.030380 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:19:38.030390 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:19:38.030400 | orchestrator | 2025-11-08 14:19:38.030410 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-11-08 14:19:38.030421 | orchestrator | Saturday 08 November 2025 14:19:25 +0000 (0:00:00.349) 0:00:04.964 ***** 2025-11-08 14:19:38.030431 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:19:38.030441 | orchestrator | ok: [testbed-node-1] 2025-11-08 14:19:38.030451 | orchestrator | ok: [testbed-node-2] 2025-11-08 14:19:38.030461 | orchestrator | 2025-11-08 14:19:38.030471 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-11-08 14:19:38.030481 | orchestrator | Saturday 08 November 2025 14:19:26 +0000 (0:00:00.582) 0:00:05.546 ***** 2025-11-08 14:19:38.030491 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:19:38.030501 | orchestrator | ok: [testbed-node-1] 2025-11-08 14:19:38.030511 | orchestrator | ok: [testbed-node-2] 2025-11-08 14:19:38.030521 | orchestrator | 2025-11-08 14:19:38.030531 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ******************** 2025-11-08 14:19:38.030541 | orchestrator | Saturday 08 November 2025 14:19:26 +0000 (0:00:00.377) 0:00:05.923 ***** 2025-11-08 14:19:38.030575 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:19:38.030586 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:19:38.030596 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:19:38.030606 | orchestrator | 2025-11-08 14:19:38.030616 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2025-11-08 14:19:38.030626 | orchestrator | Saturday 08 November 2025 14:19:26 +0000 (0:00:00.310) 0:00:06.234 ***** 2025-11-08 14:19:38.030636 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:19:38.030646 | orchestrator | ok: [testbed-node-1] 2025-11-08 14:19:38.030656 | orchestrator | ok: [testbed-node-2] 2025-11-08 14:19:38.030665 | orchestrator | 2025-11-08 14:19:38.030679 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-11-08 14:19:38.030694 | orchestrator | Saturday 08 November 2025 14:19:27 +0000 (0:00:00.570) 0:00:06.804 ***** 2025-11-08 14:19:38.030708 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:19:38.030723 | orchestrator | 2025-11-08 14:19:38.030739 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-11-08 14:19:38.030755 | orchestrator | Saturday 08 November 2025 14:19:27 +0000 (0:00:00.264) 0:00:07.069 ***** 2025-11-08 14:19:38.030771 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:19:38.030781 | orchestrator | 2025-11-08 14:19:38.030805 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-11-08 14:19:38.030814 | orchestrator | Saturday 08 November 2025 14:19:28 +0000 (0:00:00.257) 0:00:07.326 ***** 2025-11-08 14:19:38.030823 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:19:38.030832 | orchestrator | 2025-11-08 14:19:38.030840 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-11-08 
14:19:38.030849 | orchestrator | Saturday 08 November 2025 14:19:28 +0000 (0:00:00.288) 0:00:07.615 ***** 2025-11-08 14:19:38.030857 | orchestrator | 2025-11-08 14:19:38.030866 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-11-08 14:19:38.030875 | orchestrator | Saturday 08 November 2025 14:19:28 +0000 (0:00:00.073) 0:00:07.688 ***** 2025-11-08 14:19:38.030883 | orchestrator | 2025-11-08 14:19:38.030892 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-11-08 14:19:38.030901 | orchestrator | Saturday 08 November 2025 14:19:28 +0000 (0:00:00.087) 0:00:07.775 ***** 2025-11-08 14:19:38.030910 | orchestrator | 2025-11-08 14:19:38.030918 | orchestrator | TASK [Print report file information] ******************************************* 2025-11-08 14:19:38.030927 | orchestrator | Saturday 08 November 2025 14:19:28 +0000 (0:00:00.074) 0:00:07.850 ***** 2025-11-08 14:19:38.030935 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:19:38.030944 | orchestrator | 2025-11-08 14:19:38.030953 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-11-08 14:19:38.030989 | orchestrator | Saturday 08 November 2025 14:19:28 +0000 (0:00:00.271) 0:00:08.121 ***** 2025-11-08 14:19:38.031001 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:19:38.031017 | orchestrator | 2025-11-08 14:19:38.031053 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2025-11-08 14:19:38.031069 | orchestrator | Saturday 08 November 2025 14:19:29 +0000 (0:00:00.320) 0:00:08.442 ***** 2025-11-08 14:19:38.031081 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:19:38.031094 | orchestrator | 2025-11-08 14:19:38.031106 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 2025-11-08 14:19:38.031119 | orchestrator | Saturday 08 November 2025 14:19:29 +0000 (0:00:00.142) 0:00:08.584 ***** 2025-11-08 14:19:38.031131 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:19:38.031145 | orchestrator | 2025-11-08 14:19:38.031158 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2025-11-08 14:19:38.031171 | orchestrator | Saturday 08 November 2025 14:19:31 +0000 (0:00:02.006) 0:00:10.591 ***** 2025-11-08 14:19:38.031184 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:19:38.031197 | orchestrator | 2025-11-08 14:19:38.031210 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2025-11-08 14:19:38.031223 | orchestrator | Saturday 08 November 2025 14:19:31 +0000 (0:00:00.491) 0:00:11.082 ***** 2025-11-08 14:19:38.031249 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:19:38.031262 | orchestrator | 2025-11-08 14:19:38.031275 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2025-11-08 14:19:38.031289 | orchestrator | Saturday 08 November 2025 14:19:32 +0000 (0:00:00.367) 0:00:11.449 ***** 2025-11-08 14:19:38.031304 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:19:38.031319 | orchestrator | 2025-11-08 14:19:38.031333 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2025-11-08 14:19:38.031348 | orchestrator | Saturday 08 November 2025 14:19:32 +0000 (0:00:00.157) 0:00:11.607 ***** 2025-11-08 14:19:38.031363 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:19:38.031378 | orchestrator | 
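The mgr-module test above gathers the module list from one mgr container, parses the JSON, extracts the enabled modules, and compares them against a required set that is not printed in this log. A rough manual equivalent is sketched below; the enabled_modules key of `ceph mgr module ls --format json` and the required list are assumptions to be checked against the JSON your Ceph release actually prints:

#!/usr/bin/env bash
# Hypothetical manual spot-check of enabled mgr modules; the required list is
# a placeholder, not the validator's actual configuration.
set -euo pipefail

required_modules="balancer"   # placeholder; the real list lives in the validator, not in this log
enabled=$(ceph mgr module ls --format json | jq -r '.enabled_modules[]')

for module in ${required_modules}; do
    if ! printf '%s\n' "${enabled}" | grep -qx "${module}"; then
        echo "mgr module '${module}' is not enabled" >&2
        exit 1
    fi
done
echo "all required mgr modules are enabled"

In this run the playbook's own check passes ("Pass test if required mgr modules are enabled"), so the enabled set already covers whatever the validator requires.
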
2025-11-08 14:19:38.031393 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-11-08 14:19:38.031407 | orchestrator | Saturday 08 November 2025 14:19:32 +0000 (0:00:00.172) 0:00:11.780 ***** 2025-11-08 14:19:38.031423 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-11-08 14:19:38.031437 | orchestrator | 2025-11-08 14:19:38.031452 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-11-08 14:19:38.031463 | orchestrator | Saturday 08 November 2025 14:19:32 +0000 (0:00:00.355) 0:00:12.136 ***** 2025-11-08 14:19:38.031472 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:19:38.031480 | orchestrator | 2025-11-08 14:19:38.031489 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-11-08 14:19:38.031498 | orchestrator | Saturday 08 November 2025 14:19:33 +0000 (0:00:00.343) 0:00:12.480 ***** 2025-11-08 14:19:38.031506 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-11-08 14:19:38.031515 | orchestrator | 2025-11-08 14:19:38.031523 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-11-08 14:19:38.031532 | orchestrator | Saturday 08 November 2025 14:19:34 +0000 (0:00:01.562) 0:00:14.042 ***** 2025-11-08 14:19:38.031540 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-11-08 14:19:38.031549 | orchestrator | 2025-11-08 14:19:38.031557 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-11-08 14:19:38.031566 | orchestrator | Saturday 08 November 2025 14:19:35 +0000 (0:00:00.315) 0:00:14.357 ***** 2025-11-08 14:19:38.031574 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-11-08 14:19:38.031583 | orchestrator | 2025-11-08 14:19:38.031592 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-11-08 14:19:38.031600 | orchestrator | Saturday 08 November 2025 14:19:35 +0000 (0:00:00.337) 0:00:14.695 ***** 2025-11-08 14:19:38.031608 | orchestrator | 2025-11-08 14:19:38.031617 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-11-08 14:19:38.031626 | orchestrator | Saturday 08 November 2025 14:19:35 +0000 (0:00:00.118) 0:00:14.814 ***** 2025-11-08 14:19:38.031634 | orchestrator | 2025-11-08 14:19:38.031643 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-11-08 14:19:38.031651 | orchestrator | Saturday 08 November 2025 14:19:35 +0000 (0:00:00.102) 0:00:14.916 ***** 2025-11-08 14:19:38.031660 | orchestrator | 2025-11-08 14:19:38.031668 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-11-08 14:19:38.031677 | orchestrator | Saturday 08 November 2025 14:19:35 +0000 (0:00:00.345) 0:00:15.262 ***** 2025-11-08 14:19:38.031685 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-11-08 14:19:38.031694 | orchestrator | 2025-11-08 14:19:38.031703 | orchestrator | TASK [Print report file information] ******************************************* 2025-11-08 14:19:38.031712 | orchestrator | Saturday 08 November 2025 14:19:37 +0000 (0:00:01.588) 0:00:16.850 ***** 2025-11-08 14:19:38.031720 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-11-08 14:19:38.031729 | orchestrator |  "msg": [ 2025-11-08 
14:19:38.031739 | orchestrator |  "Validator run completed.", 2025-11-08 14:19:38.031748 | orchestrator |  "You can find the report file here:", 2025-11-08 14:19:38.031765 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-11-08T14:19:21+00:00-report.json", 2025-11-08 14:19:38.031776 | orchestrator |  "on the following host:", 2025-11-08 14:19:38.031784 | orchestrator |  "testbed-manager" 2025-11-08 14:19:38.031793 | orchestrator |  ] 2025-11-08 14:19:38.031803 | orchestrator | } 2025-11-08 14:19:38.031812 | orchestrator | 2025-11-08 14:19:38.031820 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 14:19:38.031831 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-11-08 14:19:38.031842 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 14:19:38.031861 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 14:19:38.508585 | orchestrator | 2025-11-08 14:19:38.508712 | orchestrator | 2025-11-08 14:19:38.508729 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 14:19:38.508743 | orchestrator | Saturday 08 November 2025 14:19:38 +0000 (0:00:00.474) 0:00:17.325 ***** 2025-11-08 14:19:38.508754 | orchestrator | =============================================================================== 2025-11-08 14:19:38.508765 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.01s 2025-11-08 14:19:38.508777 | orchestrator | Write report file ------------------------------------------------------- 1.59s 2025-11-08 14:19:38.508788 | orchestrator | Create report output directory ------------------------------------------ 1.56s 2025-11-08 14:19:38.508799 | orchestrator | Aggregate test results step one ----------------------------------------- 1.56s 2025-11-08 14:19:38.508809 | orchestrator | Get container info ------------------------------------------------------ 1.06s 2025-11-08 14:19:38.508820 | orchestrator | Get timestamp for report file ------------------------------------------- 0.97s 2025-11-08 14:19:38.508845 | orchestrator | Set test result to passed if container is existing ---------------------- 0.58s 2025-11-08 14:19:38.508857 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.57s 2025-11-08 14:19:38.508877 | orchestrator | Flush handlers ---------------------------------------------------------- 0.57s 2025-11-08 14:19:38.508888 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.49s 2025-11-08 14:19:38.508899 | orchestrator | Print report file information ------------------------------------------- 0.47s 2025-11-08 14:19:38.508909 | orchestrator | Prepare test data ------------------------------------------------------- 0.38s 2025-11-08 14:19:38.508920 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.37s 2025-11-08 14:19:38.508931 | orchestrator | Prepare test data for container existance test -------------------------- 0.36s 2025-11-08 14:19:38.508942 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.36s 2025-11-08 14:19:38.508953 | orchestrator | Set test result to failed if container is missing ----------------------- 0.35s 2025-11-08 14:19:38.508986 | orchestrator | Set 
validation result to failed if a test failed ------------------------ 0.34s 2025-11-08 14:19:38.508997 | orchestrator | Aggregate test results step three --------------------------------------- 0.34s 2025-11-08 14:19:38.509008 | orchestrator | Fail due to missing containers ------------------------------------------ 0.32s 2025-11-08 14:19:38.509019 | orchestrator | Aggregate test results step two ----------------------------------------- 0.31s 2025-11-08 14:19:38.973796 | orchestrator | + osism validate ceph-osds 2025-11-08 14:20:02.732243 | orchestrator | 2025-11-08 14:20:02.732333 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2025-11-08 14:20:02.732343 | orchestrator | 2025-11-08 14:20:02.732350 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-11-08 14:20:02.732358 | orchestrator | Saturday 08 November 2025 14:19:57 +0000 (0:00:00.502) 0:00:00.502 ***** 2025-11-08 14:20:02.732390 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-11-08 14:20:02.732402 | orchestrator | 2025-11-08 14:20:02.732411 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-11-08 14:20:02.732421 | orchestrator | Saturday 08 November 2025 14:19:58 +0000 (0:00:01.040) 0:00:01.543 ***** 2025-11-08 14:20:02.732428 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-11-08 14:20:02.732433 | orchestrator | 2025-11-08 14:20:02.732439 | orchestrator | TASK [Create report output directory] ****************************************** 2025-11-08 14:20:02.732444 | orchestrator | Saturday 08 November 2025 14:19:58 +0000 (0:00:00.713) 0:00:02.256 ***** 2025-11-08 14:20:02.732450 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-11-08 14:20:02.732456 | orchestrator | 2025-11-08 14:20:02.732475 | orchestrator | TASK [Define report vars] ****************************************************** 2025-11-08 14:20:02.732481 | orchestrator | Saturday 08 November 2025 14:19:59 +0000 (0:00:00.948) 0:00:03.205 ***** 2025-11-08 14:20:02.732488 | orchestrator | ok: [testbed-node-3] 2025-11-08 14:20:02.732499 | orchestrator | 2025-11-08 14:20:02.732508 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-11-08 14:20:02.732517 | orchestrator | Saturday 08 November 2025 14:19:59 +0000 (0:00:00.146) 0:00:03.351 ***** 2025-11-08 14:20:02.732529 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:20:02.732538 | orchestrator | 2025-11-08 14:20:02.732547 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-11-08 14:20:02.732556 | orchestrator | Saturday 08 November 2025 14:20:00 +0000 (0:00:00.156) 0:00:03.508 ***** 2025-11-08 14:20:02.732564 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:20:02.732572 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:20:02.732580 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:20:02.732588 | orchestrator | 2025-11-08 14:20:02.732597 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-11-08 14:20:02.732606 | orchestrator | Saturday 08 November 2025 14:20:00 +0000 (0:00:00.421) 0:00:03.929 ***** 2025-11-08 14:20:02.732615 | orchestrator | ok: [testbed-node-3] 2025-11-08 14:20:02.732624 | orchestrator | 2025-11-08 14:20:02.732634 | orchestrator | TASK [Calculate OSD devices for each host] 
************************************* 2025-11-08 14:20:02.732640 | orchestrator | Saturday 08 November 2025 14:20:00 +0000 (0:00:00.217) 0:00:04.146 ***** 2025-11-08 14:20:02.732645 | orchestrator | ok: [testbed-node-3] 2025-11-08 14:20:02.732654 | orchestrator | ok: [testbed-node-4] 2025-11-08 14:20:02.732663 | orchestrator | ok: [testbed-node-5] 2025-11-08 14:20:02.732672 | orchestrator | 2025-11-08 14:20:02.732680 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2025-11-08 14:20:02.732689 | orchestrator | Saturday 08 November 2025 14:20:01 +0000 (0:00:00.397) 0:00:04.544 ***** 2025-11-08 14:20:02.732696 | orchestrator | ok: [testbed-node-3] 2025-11-08 14:20:02.732705 | orchestrator | 2025-11-08 14:20:02.732714 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-11-08 14:20:02.732723 | orchestrator | Saturday 08 November 2025 14:20:02 +0000 (0:00:00.981) 0:00:05.526 ***** 2025-11-08 14:20:02.732732 | orchestrator | ok: [testbed-node-3] 2025-11-08 14:20:02.732742 | orchestrator | ok: [testbed-node-4] 2025-11-08 14:20:02.732750 | orchestrator | ok: [testbed-node-5] 2025-11-08 14:20:02.732758 | orchestrator | 2025-11-08 14:20:02.732763 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2025-11-08 14:20:02.732770 | orchestrator | Saturday 08 November 2025 14:20:02 +0000 (0:00:00.333) 0:00:05.859 ***** 2025-11-08 14:20:02.732783 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e6bf4d7fcf8aaa9b69a2d4387365f20852bcbd5565417bc6da41cb69d132f2d3', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-11-08 14:20:02.732795 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f1fd8dcf1e25c246af435f7a9d4d0dbd8f1d78c1e65de928d12242c8e26528d4', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2025-11-08 14:20:02.732814 | orchestrator | skipping: [testbed-node-3] => (item={'id': '720340d2f303707e14f01ccf3dd4a6c89462d1a19f5d93cffe34557546bf7250', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-11-08 14:20:02.732827 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd319ac891fcef4a0c4d1de8de79600b4f8614808b84d51cd2b7937a55b97fa90', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-11-08 14:20:02.732837 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'fc025060b80a275c47dff58de1307a497427da74b00e7ce130799aeceff69461', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-11-08 14:20:02.732865 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4e78d994fb766c1c86c31eaf8416afe52b167e31a6f909979287ecbe67b73fe2', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 15 minutes (healthy)'})  2025-11-08 14:20:02.732882 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3393d3d43714e160f70a3306146ed07e895066911a132a7958b26a28d0d70992', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 
'status': 'Up 16 minutes'})  2025-11-08 14:20:02.732892 | orchestrator | skipping: [testbed-node-3] => (item={'id': '31878ec41bcbed484a6f3354d2df5f1318ba3439b4b278ef085041c512e26d02', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 17 minutes'})  2025-11-08 14:20:02.732902 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd99a1a1100459ad74a32049e29cc2aaa72ea94ec135318dc88f39a55bf101529', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 18 minutes'})  2025-11-08 14:20:02.732915 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9060a314ea586f781e244f204ed9c829ae34de15bf6812aefc104f49b99dad79', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 24 minutes'})  2025-11-08 14:20:02.732930 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'fafd34dcb82827ff183179a10b4f3be55d0fd2c252b5babe19ca421fb5d38c26', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 25 minutes'})  2025-11-08 14:20:02.732940 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6121937ae3e71a48b69806f36ce7a2a5ba6a3be877dcd08fbca4c03afe1279c4', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 26 minutes'})  2025-11-08 14:20:02.732949 | orchestrator | ok: [testbed-node-3] => (item={'id': '1c777f1b3ebd5ef261524be3cf8b7fefb428c2d0e36cb08e4a7124950d1ff6d6', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 27 minutes'}) 2025-11-08 14:20:02.732980 | orchestrator | ok: [testbed-node-3] => (item={'id': '6ad86345579684a7e867edd5ac3f8f4e2e0f994eb9916d9422ed13a3900ba838', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 27 minutes'}) 2025-11-08 14:20:02.732990 | orchestrator | skipping: [testbed-node-3] => (item={'id': '815c3c4fadf4f2c46c5a547f239e1b773ab1e54951f5f8ce4fb1a9a719eedccc', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 30 minutes'})  2025-11-08 14:20:02.732999 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1e92e5bd9b2b28af58733c6eb92334b323bb91d149765d1fccc6c9e128c321a3', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 31 minutes (healthy)'})  2025-11-08 14:20:02.733017 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ba88309a22cb93f32fa40e9ffcae91034b7173f828983759bc356d7467123c4e', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 32 minutes (healthy)'})  2025-11-08 14:20:02.733027 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0c76701cd884c0992275671975f0a67b7d09c7dd546506223395a3b4a045b86e', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 33 minutes'})  2025-11-08 14:20:02.733035 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'bba0a6ee9b2e5a488d0abcff74f2bd6fdfb0856f919fbd7184eeff07f639f48c', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 33 
minutes'})  2025-11-08 14:20:02.733043 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e2d21682ceb0a6b286bcf213e7dd42f3912fcb0aa422f576cc98a9d55587a283', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 33 minutes'})  2025-11-08 14:20:02.733051 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd7d2e3313390dc5b840e7f20cfefc21fcde0be998f076949b22b7a01507a8209', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-11-08 14:20:02.733066 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a57a4927229777761842da8733fe82633430cfe9d6df0a5805e213e82c2e2eed', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2025-11-08 14:20:03.138097 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7bb12429848b95cc545662770c67f55341f39a7f4e41ed4567894b7243955de9', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-11-08 14:20:03.138173 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd91dc413ec2a2d1b18352341624855640557d824e65f575ca4252f7e83756fd2', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-11-08 14:20:03.138182 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f03cf3f0fbbe6d3cb098bf1b245eee8cfe887352e9d428c11b279160dd04a79c', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})  2025-11-08 14:20:03.138186 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5dd43947b75f75b90582ce6769cd44f6a95514c54591c2c61399bfbb2beb1baa', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 15 minutes (healthy)'})  2025-11-08 14:20:03.138192 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd90fc8ae5a894b044f6bdb51fd9113a9e9c60b5428186cd0ff25ab42dfb0b4c4', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2025-11-08 14:20:03.138196 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5261e8936e227d60202da3c789201f88df4091462e0e979429d3a1e370f07a84', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 17 minutes'})  2025-11-08 14:20:03.138200 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1b71c1b9b6719a2362fa51492b5025c58e99b50a72044ac60a580ef11e2c31ac', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 18 minutes'})  2025-11-08 14:20:03.138205 | orchestrator | skipping: [testbed-node-4] => (item={'id': '753330c4d6fbe546bfc79c5640b358b099aa549f5c31b9f82d967cf99da859fc', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 24 minutes'})  2025-11-08 14:20:03.138232 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7437b004fed4fff5d43437444e0fc95b2fc5397ee13c9c51fbfa189d7fe1b7a4', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': 
'/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 25 minutes'})  2025-11-08 14:20:03.138237 | orchestrator | skipping: [testbed-node-4] => (item={'id': '22f31f795d4924dde439fdae4e8676b65927a951f557f269b959b7d89b431524', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 26 minutes'})  2025-11-08 14:20:03.138241 | orchestrator | ok: [testbed-node-4] => (item={'id': 'b644c3757b569ac8df1f4e9b8d236493bbbedcab35c4d608ddfa9e9e22ec143a', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 27 minutes'}) 2025-11-08 14:20:03.138245 | orchestrator | ok: [testbed-node-4] => (item={'id': 'bbe4690734168eeaf973c2bdc9adf5d753764b0e7587723250670dd5c893b7d4', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 27 minutes'}) 2025-11-08 14:20:03.138249 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b73dbacb00b16447d94ef840906d501da7d420c25f3543878b3a39cda2488f00', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 30 minutes'})  2025-11-08 14:20:03.138254 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f3fdd6654eaf40cfb390e6a39644884d51400254ec9eb1330393d4cdbe8a36d0', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 31 minutes (healthy)'})  2025-11-08 14:20:03.138258 | orchestrator | skipping: [testbed-node-4] => (item={'id': '115ecfa2da649a0c16ab256ac7444599328060cea4f898b8b085f7f2bbd7ed36', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 32 minutes (healthy)'})  2025-11-08 14:20:03.138273 | orchestrator | skipping: [testbed-node-4] => (item={'id': '122a176c6f985ffb97c302f4686b657817786e934354788e9be1406f35bd094c', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 33 minutes'})  2025-11-08 14:20:03.138277 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'eda8eaca0bfef7e9f3c982191bb0e7a75c2c10058cea987ed1cd76937ad3a15b', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 33 minutes'})  2025-11-08 14:20:03.138281 | orchestrator | skipping: [testbed-node-4] => (item={'id': '57dba0fe2605216f0aa810a4fffca1dc22c160327c1d5b757f3ee7737beafce8', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 33 minutes'})  2025-11-08 14:20:03.138298 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9796fd142c06425a355e72d88e0ce3e20a6f0665beec9a4a08b299b051c4f0e4', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-11-08 14:20:03.138304 | orchestrator | skipping: [testbed-node-5] => (item={'id': '36e0f0966d88b0f604ad85a6b2a572b6c7e241060b4abc07ff001ba82b8f75ac', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2025-11-08 14:20:03.138308 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'eedaebecf4981b43a25e682680d2585e2ebc2a17cb2b1d351fe029e453e07ce3', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  
2025-11-08 14:20:03.138312 | orchestrator | skipping: [testbed-node-5] => (item={'id': '28540afd2ad2ee41a0f8eec517ebeca8fd9e8db46f95a9df1e3e8cb85d4e7dd9', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-11-08 14:20:03.138323 | orchestrator | skipping: [testbed-node-5] => (item={'id': '748569e92acc82f669b324b3a8ea2d61c195b580a4a524d7dcb2329d23ddd912', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-11-08 14:20:03.138327 | orchestrator | skipping: [testbed-node-5] => (item={'id': '94d84b362769364dec2fc6350d8edb5c8f89e9b571b06b8ffa8a480c5450b26a', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 15 minutes (healthy)'})  2025-11-08 14:20:03.138331 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4208f9a361381ee57f08b26d6ef9b3ccd612d5604629087be5007175e166a1fa', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2025-11-08 14:20:03.138334 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8c57cd8550182de72690c04cda44d4efef8c4d8fb6e540c7f3896d7734a20482', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 17 minutes'})  2025-11-08 14:20:03.138338 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8d98cf0be370b30656348b1198da8ba79a73f3e6ffd2b38f79de160dbd4a5959', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 18 minutes'})  2025-11-08 14:20:03.138342 | orchestrator | skipping: [testbed-node-5] => (item={'id': '37788e533d01f1d0d7164fea732f2a592764ca62114b8b0e7c34423a81d592bd', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 24 minutes'})  2025-11-08 14:20:03.138346 | orchestrator | skipping: [testbed-node-5] => (item={'id': '013441ea8b77bf5ed13f3acfa211c8dfed2b55daa6fd7910b6fa265cb976a14f', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 25 minutes'})  2025-11-08 14:20:03.138350 | orchestrator | skipping: [testbed-node-5] => (item={'id': '471cec21f6ed721e2e834e82e11a7a4adb400acd12a461f49ff88db5e4a62ff0', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 26 minutes'})  2025-11-08 14:20:03.138358 | orchestrator | ok: [testbed-node-5] => (item={'id': 'c9c93dc9f01d0c007494495ed96cb533a8aab20546fe5a9d039f1846d62f86ef', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 27 minutes'}) 2025-11-08 14:20:13.618175 | orchestrator | ok: [testbed-node-5] => (item={'id': 'ba20e9d9ff4d701dc999a50e7746c0e06672f526bb30787b8e51867a21bf1aed', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 27 minutes'}) 2025-11-08 14:20:13.619227 | orchestrator | skipping: [testbed-node-5] => (item={'id': '64353dbe81299fd5c9f061dd5562af16a6c236101d454562040d15d2afaabf67', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 
'running', 'status': 'Up 30 minutes'})  2025-11-08 14:20:13.619271 | orchestrator | skipping: [testbed-node-5] => (item={'id': '510b46b870ac151986c0a52b556dc6d0d04c865886d39643c42f86e05cd490fd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 31 minutes (healthy)'})  2025-11-08 14:20:13.619302 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c31f1ff3afd35603d7ac8313bcaa3d206615cb54c349f6d54672360165e3158b', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 32 minutes (healthy)'})  2025-11-08 14:20:13.619315 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8d0661a3c023a3c8ee3e04c468762c2c9b503e649c59ca75b2a83561375f9f42', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 33 minutes'})  2025-11-08 14:20:13.619347 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b1dcce3c7d570746c9b1cf36aaa0d683fbc7e43dd1108741262d052c7766a636', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 33 minutes'})  2025-11-08 14:20:13.619358 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2592bc42a48d02608f66c05bd7724dba9ba945f354c8abe7fddffa3dd7614bb8', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 33 minutes'})  2025-11-08 14:20:13.619368 | orchestrator | 2025-11-08 14:20:13.619381 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2025-11-08 14:20:13.619393 | orchestrator | Saturday 08 November 2025 14:20:03 +0000 (0:00:00.714) 0:00:06.574 ***** 2025-11-08 14:20:13.619402 | orchestrator | ok: [testbed-node-3] 2025-11-08 14:20:13.619413 | orchestrator | ok: [testbed-node-4] 2025-11-08 14:20:13.619423 | orchestrator | ok: [testbed-node-5] 2025-11-08 14:20:13.619432 | orchestrator | 2025-11-08 14:20:13.619442 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2025-11-08 14:20:13.619452 | orchestrator | Saturday 08 November 2025 14:20:03 +0000 (0:00:00.367) 0:00:06.942 ***** 2025-11-08 14:20:13.619462 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:20:13.619472 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:20:13.619482 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:20:13.619492 | orchestrator | 2025-11-08 14:20:13.619501 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2025-11-08 14:20:13.619511 | orchestrator | Saturday 08 November 2025 14:20:04 +0000 (0:00:00.662) 0:00:07.604 ***** 2025-11-08 14:20:13.619521 | orchestrator | ok: [testbed-node-3] 2025-11-08 14:20:13.619530 | orchestrator | ok: [testbed-node-4] 2025-11-08 14:20:13.619540 | orchestrator | ok: [testbed-node-5] 2025-11-08 14:20:13.619549 | orchestrator | 2025-11-08 14:20:13.619559 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-11-08 14:20:13.619568 | orchestrator | Saturday 08 November 2025 14:20:04 +0000 (0:00:00.365) 0:00:07.970 ***** 2025-11-08 14:20:13.619578 | orchestrator | ok: [testbed-node-3] 2025-11-08 14:20:13.619588 | orchestrator | ok: [testbed-node-4] 2025-11-08 14:20:13.619597 | orchestrator | ok: [testbed-node-5] 2025-11-08 14:20:13.619607 | orchestrator | 2025-11-08 14:20:13.619616 | orchestrator | TASK [Get list of ceph-osd 
containers that are not running] ******************** 2025-11-08 14:20:13.619626 | orchestrator | Saturday 08 November 2025 14:20:04 +0000 (0:00:00.346) 0:00:08.316 ***** 2025-11-08 14:20:13.619636 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2025-11-08 14:20:13.619647 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2025-11-08 14:20:13.619656 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:20:13.619666 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2025-11-08 14:20:13.619676 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2025-11-08 14:20:13.619686 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:20:13.619696 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2025-11-08 14:20:13.619706 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2025-11-08 14:20:13.619716 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:20:13.619725 | orchestrator | 2025-11-08 14:20:13.619735 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2025-11-08 14:20:13.619745 | orchestrator | Saturday 08 November 2025 14:20:05 +0000 (0:00:00.432) 0:00:08.749 ***** 2025-11-08 14:20:13.619754 | orchestrator | ok: [testbed-node-3] 2025-11-08 14:20:13.619764 | orchestrator | ok: [testbed-node-4] 2025-11-08 14:20:13.619773 | orchestrator | ok: [testbed-node-5] 2025-11-08 14:20:13.619790 | orchestrator | 2025-11-08 14:20:13.619823 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-11-08 14:20:13.619833 | orchestrator | Saturday 08 November 2025 14:20:05 +0000 (0:00:00.666) 0:00:09.415 ***** 2025-11-08 14:20:13.619843 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:20:13.619852 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:20:13.619862 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:20:13.619872 | orchestrator | 2025-11-08 14:20:13.619881 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-11-08 14:20:13.619891 | orchestrator | Saturday 08 November 2025 14:20:06 +0000 (0:00:00.340) 0:00:09.756 ***** 2025-11-08 14:20:13.619901 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:20:13.619910 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:20:13.619920 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:20:13.619929 | orchestrator | 2025-11-08 14:20:13.619939 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2025-11-08 14:20:13.619948 | orchestrator | Saturday 08 November 2025 14:20:06 +0000 (0:00:00.302) 0:00:10.059 ***** 2025-11-08 14:20:13.619987 | orchestrator | ok: [testbed-node-3] 2025-11-08 14:20:13.619998 | orchestrator | ok: [testbed-node-4] 2025-11-08 14:20:13.620008 | orchestrator | ok: [testbed-node-5] 2025-11-08 14:20:13.620017 | orchestrator | 2025-11-08 14:20:13.620027 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-11-08 14:20:13.620036 | orchestrator | Saturday 08 November 2025 14:20:06 +0000 (0:00:00.315) 0:00:10.374 ***** 2025-11-08 14:20:13.620046 | orchestrator | skipping: [testbed-node-3] 2025-11-08 
14:20:13.620056 | orchestrator | 2025-11-08 14:20:13.620066 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-11-08 14:20:13.620081 | orchestrator | Saturday 08 November 2025 14:20:07 +0000 (0:00:00.879) 0:00:11.254 ***** 2025-11-08 14:20:13.620091 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:20:13.620100 | orchestrator | 2025-11-08 14:20:13.620110 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-11-08 14:20:13.620120 | orchestrator | Saturday 08 November 2025 14:20:08 +0000 (0:00:00.292) 0:00:11.546 ***** 2025-11-08 14:20:13.620129 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:20:13.620139 | orchestrator | 2025-11-08 14:20:13.620148 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-11-08 14:20:13.620158 | orchestrator | Saturday 08 November 2025 14:20:08 +0000 (0:00:00.300) 0:00:11.847 ***** 2025-11-08 14:20:13.620168 | orchestrator | 2025-11-08 14:20:13.620177 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-11-08 14:20:13.620187 | orchestrator | Saturday 08 November 2025 14:20:08 +0000 (0:00:00.071) 0:00:11.918 ***** 2025-11-08 14:20:13.620197 | orchestrator | 2025-11-08 14:20:13.620206 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-11-08 14:20:13.620216 | orchestrator | Saturday 08 November 2025 14:20:08 +0000 (0:00:00.074) 0:00:11.993 ***** 2025-11-08 14:20:13.620225 | orchestrator | 2025-11-08 14:20:13.620235 | orchestrator | TASK [Print report file information] ******************************************* 2025-11-08 14:20:13.620244 | orchestrator | Saturday 08 November 2025 14:20:08 +0000 (0:00:00.091) 0:00:12.085 ***** 2025-11-08 14:20:13.620254 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:20:13.620263 | orchestrator | 2025-11-08 14:20:13.620273 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2025-11-08 14:20:13.620283 | orchestrator | Saturday 08 November 2025 14:20:08 +0000 (0:00:00.256) 0:00:12.341 ***** 2025-11-08 14:20:13.620292 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:20:13.620302 | orchestrator | 2025-11-08 14:20:13.620312 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-11-08 14:20:13.620321 | orchestrator | Saturday 08 November 2025 14:20:09 +0000 (0:00:00.290) 0:00:12.632 ***** 2025-11-08 14:20:13.620331 | orchestrator | ok: [testbed-node-3] 2025-11-08 14:20:13.620400 | orchestrator | ok: [testbed-node-4] 2025-11-08 14:20:13.620412 | orchestrator | ok: [testbed-node-5] 2025-11-08 14:20:13.620429 | orchestrator | 2025-11-08 14:20:13.620443 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2025-11-08 14:20:13.620460 | orchestrator | Saturday 08 November 2025 14:20:09 +0000 (0:00:00.331) 0:00:12.963 ***** 2025-11-08 14:20:13.620477 | orchestrator | ok: [testbed-node-3] 2025-11-08 14:20:13.620499 | orchestrator | 2025-11-08 14:20:13.620518 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2025-11-08 14:20:13.620534 | orchestrator | Saturday 08 November 2025 14:20:10 +0000 (0:00:00.894) 0:00:13.858 ***** 2025-11-08 14:20:13.620549 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-11-08 14:20:13.620564 | orchestrator | 2025-11-08 
14:20:13.620579 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2025-11-08 14:20:13.620595 | orchestrator | Saturday 08 November 2025 14:20:12 +0000 (0:00:01.912) 0:00:15.771 ***** 2025-11-08 14:20:13.620610 | orchestrator | ok: [testbed-node-3] 2025-11-08 14:20:13.620626 | orchestrator | 2025-11-08 14:20:13.620643 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2025-11-08 14:20:13.620659 | orchestrator | Saturday 08 November 2025 14:20:12 +0000 (0:00:00.160) 0:00:15.931 ***** 2025-11-08 14:20:13.620675 | orchestrator | ok: [testbed-node-3] 2025-11-08 14:20:13.620688 | orchestrator | 2025-11-08 14:20:13.620697 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2025-11-08 14:20:13.620707 | orchestrator | Saturday 08 November 2025 14:20:12 +0000 (0:00:00.445) 0:00:16.377 ***** 2025-11-08 14:20:13.620716 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:20:13.620726 | orchestrator | 2025-11-08 14:20:13.620781 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2025-11-08 14:20:13.620792 | orchestrator | Saturday 08 November 2025 14:20:13 +0000 (0:00:00.135) 0:00:16.512 ***** 2025-11-08 14:20:13.620802 | orchestrator | ok: [testbed-node-3] 2025-11-08 14:20:13.620812 | orchestrator | 2025-11-08 14:20:13.620822 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-11-08 14:20:13.620831 | orchestrator | Saturday 08 November 2025 14:20:13 +0000 (0:00:00.172) 0:00:16.685 ***** 2025-11-08 14:20:13.620841 | orchestrator | ok: [testbed-node-3] 2025-11-08 14:20:13.620850 | orchestrator | ok: [testbed-node-4] 2025-11-08 14:20:13.620860 | orchestrator | ok: [testbed-node-5] 2025-11-08 14:20:13.620870 | orchestrator | 2025-11-08 14:20:13.620879 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2025-11-08 14:20:13.620901 | orchestrator | Saturday 08 November 2025 14:20:13 +0000 (0:00:00.375) 0:00:17.060 ***** 2025-11-08 14:20:27.244211 | orchestrator | changed: [testbed-node-3] 2025-11-08 14:20:27.244350 | orchestrator | changed: [testbed-node-4] 2025-11-08 14:20:27.244375 | orchestrator | changed: [testbed-node-5] 2025-11-08 14:20:27.244396 | orchestrator | 2025-11-08 14:20:27.244415 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2025-11-08 14:20:27.244436 | orchestrator | Saturday 08 November 2025 14:20:16 +0000 (0:00:02.831) 0:00:19.891 ***** 2025-11-08 14:20:27.244455 | orchestrator | ok: [testbed-node-3] 2025-11-08 14:20:27.244475 | orchestrator | ok: [testbed-node-4] 2025-11-08 14:20:27.244493 | orchestrator | ok: [testbed-node-5] 2025-11-08 14:20:27.244510 | orchestrator | 2025-11-08 14:20:27.244529 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2025-11-08 14:20:27.244546 | orchestrator | Saturday 08 November 2025 14:20:16 +0000 (0:00:00.401) 0:00:20.292 ***** 2025-11-08 14:20:27.244564 | orchestrator | ok: [testbed-node-3] 2025-11-08 14:20:27.244581 | orchestrator | ok: [testbed-node-4] 2025-11-08 14:20:27.244598 | orchestrator | ok: [testbed-node-5] 2025-11-08 14:20:27.244616 | orchestrator | 2025-11-08 14:20:27.244639 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2025-11-08 14:20:27.244656 | orchestrator | Saturday 08 November 2025 14:20:17 +0000 
(0:00:00.585) 0:00:20.878 ***** 2025-11-08 14:20:27.244672 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:20:27.244692 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:20:27.244710 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:20:27.244763 | orchestrator | 2025-11-08 14:20:27.244784 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2025-11-08 14:20:27.244803 | orchestrator | Saturday 08 November 2025 14:20:17 +0000 (0:00:00.358) 0:00:21.237 ***** 2025-11-08 14:20:27.244823 | orchestrator | ok: [testbed-node-3] 2025-11-08 14:20:27.244843 | orchestrator | ok: [testbed-node-4] 2025-11-08 14:20:27.244862 | orchestrator | ok: [testbed-node-5] 2025-11-08 14:20:27.244881 | orchestrator | 2025-11-08 14:20:27.244900 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2025-11-08 14:20:27.244917 | orchestrator | Saturday 08 November 2025 14:20:18 +0000 (0:00:00.622) 0:00:21.860 ***** 2025-11-08 14:20:27.244935 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:20:27.244952 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:20:27.245002 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:20:27.245021 | orchestrator | 2025-11-08 14:20:27.245040 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2025-11-08 14:20:27.245058 | orchestrator | Saturday 08 November 2025 14:20:18 +0000 (0:00:00.320) 0:00:22.180 ***** 2025-11-08 14:20:27.245074 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:20:27.245091 | orchestrator | skipping: [testbed-node-4] 2025-11-08 14:20:27.245109 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:20:27.245127 | orchestrator | 2025-11-08 14:20:27.245143 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-11-08 14:20:27.245162 | orchestrator | Saturday 08 November 2025 14:20:19 +0000 (0:00:00.325) 0:00:22.506 ***** 2025-11-08 14:20:27.245181 | orchestrator | ok: [testbed-node-3] 2025-11-08 14:20:27.245200 | orchestrator | ok: [testbed-node-4] 2025-11-08 14:20:27.245219 | orchestrator | ok: [testbed-node-5] 2025-11-08 14:20:27.245237 | orchestrator | 2025-11-08 14:20:27.245255 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2025-11-08 14:20:27.245275 | orchestrator | Saturday 08 November 2025 14:20:19 +0000 (0:00:00.539) 0:00:23.045 ***** 2025-11-08 14:20:27.245293 | orchestrator | ok: [testbed-node-3] 2025-11-08 14:20:27.245311 | orchestrator | ok: [testbed-node-4] 2025-11-08 14:20:27.245330 | orchestrator | ok: [testbed-node-5] 2025-11-08 14:20:27.245350 | orchestrator | 2025-11-08 14:20:27.245370 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2025-11-08 14:20:27.245389 | orchestrator | Saturday 08 November 2025 14:20:20 +0000 (0:00:00.908) 0:00:23.954 ***** 2025-11-08 14:20:27.245409 | orchestrator | ok: [testbed-node-3] 2025-11-08 14:20:27.245427 | orchestrator | ok: [testbed-node-4] 2025-11-08 14:20:27.245444 | orchestrator | ok: [testbed-node-5] 2025-11-08 14:20:27.245461 | orchestrator | 2025-11-08 14:20:27.245544 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2025-11-08 14:20:27.245567 | orchestrator | Saturday 08 November 2025 14:20:20 +0000 (0:00:00.345) 0:00:24.299 ***** 2025-11-08 14:20:27.245584 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:20:27.245600 | 
orchestrator | skipping: [testbed-node-4] 2025-11-08 14:20:27.245611 | orchestrator | skipping: [testbed-node-5] 2025-11-08 14:20:27.245622 | orchestrator | 2025-11-08 14:20:27.245633 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2025-11-08 14:20:27.245643 | orchestrator | Saturday 08 November 2025 14:20:21 +0000 (0:00:00.348) 0:00:24.648 ***** 2025-11-08 14:20:27.245654 | orchestrator | ok: [testbed-node-3] 2025-11-08 14:20:27.245668 | orchestrator | ok: [testbed-node-4] 2025-11-08 14:20:27.245685 | orchestrator | ok: [testbed-node-5] 2025-11-08 14:20:27.245703 | orchestrator | 2025-11-08 14:20:27.245721 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-11-08 14:20:27.245739 | orchestrator | Saturday 08 November 2025 14:20:21 +0000 (0:00:00.584) 0:00:25.233 ***** 2025-11-08 14:20:27.245758 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-11-08 14:20:27.245776 | orchestrator | 2025-11-08 14:20:27.245791 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-11-08 14:20:27.245802 | orchestrator | Saturday 08 November 2025 14:20:22 +0000 (0:00:00.288) 0:00:25.521 ***** 2025-11-08 14:20:27.245828 | orchestrator | skipping: [testbed-node-3] 2025-11-08 14:20:27.245838 | orchestrator | 2025-11-08 14:20:27.245849 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-11-08 14:20:27.245860 | orchestrator | Saturday 08 November 2025 14:20:22 +0000 (0:00:00.285) 0:00:25.806 ***** 2025-11-08 14:20:27.245871 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-11-08 14:20:27.245881 | orchestrator | 2025-11-08 14:20:27.245892 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-11-08 14:20:27.245903 | orchestrator | Saturday 08 November 2025 14:20:24 +0000 (0:00:01.776) 0:00:27.583 ***** 2025-11-08 14:20:27.245914 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-11-08 14:20:27.245924 | orchestrator | 2025-11-08 14:20:27.245935 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-11-08 14:20:27.245946 | orchestrator | Saturday 08 November 2025 14:20:24 +0000 (0:00:00.270) 0:00:27.854 ***** 2025-11-08 14:20:27.246010 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-11-08 14:20:27.246098 | orchestrator | 2025-11-08 14:20:27.246109 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-11-08 14:20:27.246120 | orchestrator | Saturday 08 November 2025 14:20:24 +0000 (0:00:00.272) 0:00:28.127 ***** 2025-11-08 14:20:27.246131 | orchestrator | 2025-11-08 14:20:27.246142 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-11-08 14:20:27.246152 | orchestrator | Saturday 08 November 2025 14:20:24 +0000 (0:00:00.080) 0:00:28.207 ***** 2025-11-08 14:20:27.246163 | orchestrator | 2025-11-08 14:20:27.246174 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-11-08 14:20:27.246184 | orchestrator | Saturday 08 November 2025 14:20:24 +0000 (0:00:00.076) 0:00:28.283 ***** 2025-11-08 14:20:27.246195 | orchestrator | 2025-11-08 14:20:27.246206 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-11-08 14:20:27.246216 | 
orchestrator | Saturday 08 November 2025 14:20:24 +0000 (0:00:00.080) 0:00:28.363 ***** 2025-11-08 14:20:27.246227 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-11-08 14:20:27.246237 | orchestrator | 2025-11-08 14:20:27.246248 | orchestrator | TASK [Print report file information] ******************************************* 2025-11-08 14:20:27.246259 | orchestrator | Saturday 08 November 2025 14:20:26 +0000 (0:00:01.495) 0:00:29.859 ***** 2025-11-08 14:20:27.246277 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2025-11-08 14:20:27.246288 | orchestrator |  "msg": [ 2025-11-08 14:20:27.246300 | orchestrator |  "Validator run completed.", 2025-11-08 14:20:27.246312 | orchestrator |  "You can find the report file here:", 2025-11-08 14:20:27.246321 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-11-08T14:19:57+00:00-report.json", 2025-11-08 14:20:27.246332 | orchestrator |  "on the following host:", 2025-11-08 14:20:27.246341 | orchestrator |  "testbed-manager" 2025-11-08 14:20:27.246351 | orchestrator |  ] 2025-11-08 14:20:27.246361 | orchestrator | } 2025-11-08 14:20:27.246371 | orchestrator | 2025-11-08 14:20:27.246381 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 14:20:27.246392 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-11-08 14:20:27.246404 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-11-08 14:20:27.246414 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-11-08 14:20:27.246423 | orchestrator | 2025-11-08 14:20:27.246433 | orchestrator | 2025-11-08 14:20:27.246443 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 14:20:27.246452 | orchestrator | Saturday 08 November 2025 14:20:26 +0000 (0:00:00.570) 0:00:30.429 ***** 2025-11-08 14:20:27.246471 | orchestrator | =============================================================================== 2025-11-08 14:20:27.246480 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.83s 2025-11-08 14:20:27.246490 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.91s 2025-11-08 14:20:27.246499 | orchestrator | Aggregate test results step one ----------------------------------------- 1.78s 2025-11-08 14:20:27.246509 | orchestrator | Write report file ------------------------------------------------------- 1.50s 2025-11-08 14:20:27.246518 | orchestrator | Get timestamp for report file ------------------------------------------- 1.04s 2025-11-08 14:20:27.246527 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.98s 2025-11-08 14:20:27.246537 | orchestrator | Create report output directory ------------------------------------------ 0.95s 2025-11-08 14:20:27.246546 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.91s 2025-11-08 14:20:27.246555 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.89s 2025-11-08 14:20:27.246565 | orchestrator | Aggregate test results step one ----------------------------------------- 0.88s 2025-11-08 14:20:27.246574 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.71s 2025-11-08 
14:20:27.246584 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.71s 2025-11-08 14:20:27.246593 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.67s 2025-11-08 14:20:27.246602 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.66s 2025-11-08 14:20:27.246612 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.62s 2025-11-08 14:20:27.246621 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.59s 2025-11-08 14:20:27.246630 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.58s 2025-11-08 14:20:27.246640 | orchestrator | Print report file information ------------------------------------------- 0.57s 2025-11-08 14:20:27.246649 | orchestrator | Prepare test data ------------------------------------------------------- 0.54s 2025-11-08 14:20:27.246659 | orchestrator | Get OSDs that are not up or in ------------------------------------------ 0.45s 2025-11-08 14:20:27.501896 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2025-11-08 14:20:27.510646 | orchestrator | + set -e 2025-11-08 14:20:27.510724 | orchestrator | + source /opt/manager-vars.sh 2025-11-08 14:20:27.510738 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-11-08 14:20:27.510750 | orchestrator | ++ NUMBER_OF_NODES=6 2025-11-08 14:20:27.510760 | orchestrator | ++ export CEPH_VERSION=reef 2025-11-08 14:20:27.510771 | orchestrator | ++ CEPH_VERSION=reef 2025-11-08 14:20:27.510782 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-11-08 14:20:27.510795 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-11-08 14:20:27.510806 | orchestrator | ++ export MANAGER_VERSION=latest 2025-11-08 14:20:27.510817 | orchestrator | ++ MANAGER_VERSION=latest 2025-11-08 14:20:27.510828 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-11-08 14:20:27.510839 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-11-08 14:20:27.510850 | orchestrator | ++ export ARA=false 2025-11-08 14:20:27.510861 | orchestrator | ++ ARA=false 2025-11-08 14:20:27.510872 | orchestrator | ++ export DEPLOY_MODE=manager 2025-11-08 14:20:27.510883 | orchestrator | ++ DEPLOY_MODE=manager 2025-11-08 14:20:27.510893 | orchestrator | ++ export TEMPEST=false 2025-11-08 14:20:27.510904 | orchestrator | ++ TEMPEST=false 2025-11-08 14:20:27.510915 | orchestrator | ++ export IS_ZUUL=true 2025-11-08 14:20:27.510925 | orchestrator | ++ IS_ZUUL=true 2025-11-08 14:20:27.510936 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.186 2025-11-08 14:20:27.510947 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.186 2025-11-08 14:20:27.510957 | orchestrator | ++ export EXTERNAL_API=false 2025-11-08 14:20:27.510994 | orchestrator | ++ EXTERNAL_API=false 2025-11-08 14:20:27.511004 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-11-08 14:20:27.511015 | orchestrator | ++ IMAGE_USER=ubuntu 2025-11-08 14:20:27.511026 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-11-08 14:20:27.511036 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-11-08 14:20:27.511047 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-11-08 14:20:27.511082 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-11-08 14:20:27.511093 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-11-08 14:20:27.511103 | orchestrator | + source /etc/os-release 2025-11-08 14:20:27.511114 | orchestrator | ++ PRETTY_NAME='Ubuntu 
24.04.3 LTS' 2025-11-08 14:20:27.511125 | orchestrator | ++ NAME=Ubuntu 2025-11-08 14:20:27.511135 | orchestrator | ++ VERSION_ID=24.04 2025-11-08 14:20:27.511146 | orchestrator | ++ VERSION='24.04.3 LTS (Noble Numbat)' 2025-11-08 14:20:27.511157 | orchestrator | ++ VERSION_CODENAME=noble 2025-11-08 14:20:27.511168 | orchestrator | ++ ID=ubuntu 2025-11-08 14:20:27.511179 | orchestrator | ++ ID_LIKE=debian 2025-11-08 14:20:27.511281 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2025-11-08 14:20:27.511296 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2025-11-08 14:20:27.511308 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2025-11-08 14:20:27.511323 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2025-11-08 14:20:27.511349 | orchestrator | ++ UBUNTU_CODENAME=noble 2025-11-08 14:20:27.511362 | orchestrator | ++ LOGO=ubuntu-logo 2025-11-08 14:20:27.511374 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2025-11-08 14:20:27.511387 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2025-11-08 14:20:27.511413 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-11-08 14:20:27.536903 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-11-08 14:20:53.186566 | orchestrator | 2025-11-08 14:20:53.186667 | orchestrator | # Status of Elasticsearch 2025-11-08 14:20:53.186681 | orchestrator | 2025-11-08 14:20:53.186689 | orchestrator | + pushd /opt/configuration/contrib 2025-11-08 14:20:53.186700 | orchestrator | + echo 2025-11-08 14:20:53.186709 | orchestrator | + echo '# Status of Elasticsearch' 2025-11-08 14:20:53.186717 | orchestrator | + echo 2025-11-08 14:20:53.186725 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2025-11-08 14:20:53.382703 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2025-11-08 14:20:53.382819 | orchestrator | 2025-11-08 14:20:53.382835 | orchestrator | # Status of MariaDB 2025-11-08 14:20:53.382849 | orchestrator | 2025-11-08 14:20:53.382860 | orchestrator | + echo 2025-11-08 14:20:53.382873 | orchestrator | + echo '# Status of MariaDB' 2025-11-08 14:20:53.382884 | orchestrator | + echo 2025-11-08 14:20:53.382895 | orchestrator | + MARIADB_USER=root_shard_0 2025-11-08 14:20:53.382907 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2025-11-08 14:20:53.454918 | orchestrator | Reading package lists... 2025-11-08 14:20:53.871109 | orchestrator | Building dependency tree... 2025-11-08 14:20:53.871358 | orchestrator | Reading state information... 2025-11-08 14:20:54.410669 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2025-11-08 14:20:54.410771 | orchestrator | bc set to manually installed. 2025-11-08 14:20:54.410786 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 
2025-11-08 14:20:55.125535 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2025-11-08 14:20:55.125667 | orchestrator | 2025-11-08 14:20:55.125684 | orchestrator | # Status of Prometheus 2025-11-08 14:20:55.125697 | orchestrator | 2025-11-08 14:20:55.125709 | orchestrator | + echo 2025-11-08 14:20:55.125720 | orchestrator | + echo '# Status of Prometheus' 2025-11-08 14:20:55.125731 | orchestrator | + echo 2025-11-08 14:20:55.125743 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2025-11-08 14:20:55.191094 | orchestrator | Unauthorized 2025-11-08 14:20:55.195286 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2025-11-08 14:20:55.247359 | orchestrator | Unauthorized 2025-11-08 14:20:55.255895 | orchestrator | 2025-11-08 14:20:55.256009 | orchestrator | # Status of RabbitMQ 2025-11-08 14:20:55.256024 | orchestrator | 2025-11-08 14:20:55.256035 | orchestrator | + echo 2025-11-08 14:20:55.256046 | orchestrator | + echo '# Status of RabbitMQ' 2025-11-08 14:20:55.256057 | orchestrator | + echo 2025-11-08 14:20:55.256068 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2025-11-08 14:20:55.857747 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2025-11-08 14:20:55.868190 | orchestrator | 2025-11-08 14:20:55.868285 | orchestrator | # Status of Redis 2025-11-08 14:20:55.868299 | orchestrator | 2025-11-08 14:20:55.868312 | orchestrator | + echo 2025-11-08 14:20:55.868324 | orchestrator | + echo '# Status of Redis' 2025-11-08 14:20:55.868336 | orchestrator | + echo 2025-11-08 14:20:55.868349 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2025-11-08 14:20:55.873807 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001989s;;;0.000000;10.000000 2025-11-08 14:20:55.874324 | orchestrator | 2025-11-08 14:20:55.874366 | orchestrator | # Create backup of MariaDB database 2025-11-08 14:20:55.874386 | orchestrator | 2025-11-08 14:20:55.874405 | orchestrator | + popd 2025-11-08 14:20:55.874422 | orchestrator | + echo 2025-11-08 14:20:55.874441 | orchestrator | + echo '# Create backup of MariaDB database' 2025-11-08 14:20:55.874461 | orchestrator | + echo 2025-11-08 14:20:55.874480 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2025-11-08 14:20:58.054448 | orchestrator | 2025-11-08 14:20:58 | INFO  | Task b29a7421-aa22-4beb-b46f-5fda11828455 (mariadb_backup) was prepared for execution. 2025-11-08 14:20:58.054557 | orchestrator | 2025-11-08 14:20:58 | INFO  | It takes a moment until task b29a7421-aa22-4beb-b46f-5fda11828455 (mariadb_backup) has been started and output is visible here. 
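The full database backup is triggered through the OSISM manager rather than by calling kolla-ansible directly; the play output that follows includes roles/mariadb/tasks/backup.yml and runs Mariabackup on a single shard member. A sketch of the invocation used here (the incremental variant is documented by kolla-ansible but is not exercised in this job and is listed only as an assumed alternative):

    # Full MariaDB backup via the OSISM manager, as run above.
    osism apply mariadb_backup -e mariadb_backup_type=full
    # Assumed alternative for follow-up backups (not run in this job):
    # osism apply mariadb_backup -e mariadb_backup_type=incremental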
2025-11-08 14:21:58.378731 | orchestrator | 2025-11-08 14:21:58.378868 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-08 14:21:58.378884 | orchestrator | 2025-11-08 14:21:58.378897 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-08 14:21:58.378908 | orchestrator | Saturday 08 November 2025 14:21:02 +0000 (0:00:00.183) 0:00:00.183 ***** 2025-11-08 14:21:58.378920 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:21:58.378931 | orchestrator | ok: [testbed-node-1] 2025-11-08 14:21:58.378942 | orchestrator | ok: [testbed-node-2] 2025-11-08 14:21:58.378953 | orchestrator | 2025-11-08 14:21:58.379031 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-08 14:21:58.379044 | orchestrator | Saturday 08 November 2025 14:21:02 +0000 (0:00:00.358) 0:00:00.542 ***** 2025-11-08 14:21:58.379056 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-11-08 14:21:58.379067 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-11-08 14:21:58.379078 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-11-08 14:21:58.379089 | orchestrator | 2025-11-08 14:21:58.379100 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-11-08 14:21:58.379111 | orchestrator | 2025-11-08 14:21:58.379123 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-11-08 14:21:58.379135 | orchestrator | Saturday 08 November 2025 14:21:03 +0000 (0:00:00.734) 0:00:01.276 ***** 2025-11-08 14:21:58.379145 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-11-08 14:21:58.379156 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-11-08 14:21:58.379167 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-11-08 14:21:58.379178 | orchestrator | 2025-11-08 14:21:58.379189 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-11-08 14:21:58.379200 | orchestrator | Saturday 08 November 2025 14:21:04 +0000 (0:00:00.456) 0:00:01.733 ***** 2025-11-08 14:21:58.379230 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-08 14:21:58.379245 | orchestrator | 2025-11-08 14:21:58.379257 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-11-08 14:21:58.379271 | orchestrator | Saturday 08 November 2025 14:21:04 +0000 (0:00:00.591) 0:00:02.324 ***** 2025-11-08 14:21:58.379290 | orchestrator | ok: [testbed-node-0] 2025-11-08 14:21:58.379310 | orchestrator | ok: [testbed-node-1] 2025-11-08 14:21:58.379328 | orchestrator | ok: [testbed-node-2] 2025-11-08 14:21:58.379345 | orchestrator | 2025-11-08 14:21:58.379363 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2025-11-08 14:21:58.379414 | orchestrator | Saturday 08 November 2025 14:21:08 +0000 (0:00:03.697) 0:00:06.022 ***** 2025-11-08 14:21:58.379435 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-11-08 14:21:58.379454 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-11-08 14:21:58.379474 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-11-08 14:21:58.379493 | orchestrator | 
mariadb_bootstrap_restart 2025-11-08 14:21:58.379513 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:21:58.379533 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:21:58.379552 | orchestrator | changed: [testbed-node-0] 2025-11-08 14:21:58.379571 | orchestrator | 2025-11-08 14:21:58.379591 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-11-08 14:21:58.379609 | orchestrator | skipping: no hosts matched 2025-11-08 14:21:58.379626 | orchestrator | 2025-11-08 14:21:58.379645 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-11-08 14:21:58.379661 | orchestrator | skipping: no hosts matched 2025-11-08 14:21:58.379672 | orchestrator | 2025-11-08 14:21:58.379682 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-11-08 14:21:58.379693 | orchestrator | skipping: no hosts matched 2025-11-08 14:21:58.379703 | orchestrator | 2025-11-08 14:21:58.379714 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-11-08 14:21:58.379725 | orchestrator | 2025-11-08 14:21:58.379735 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-11-08 14:21:58.379746 | orchestrator | Saturday 08 November 2025 14:21:57 +0000 (0:00:48.687) 0:00:54.709 ***** 2025-11-08 14:21:58.379756 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:21:58.379767 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:21:58.379778 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:21:58.379788 | orchestrator | 2025-11-08 14:21:58.379799 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-11-08 14:21:58.379810 | orchestrator | Saturday 08 November 2025 14:21:57 +0000 (0:00:00.356) 0:00:55.066 ***** 2025-11-08 14:21:58.379820 | orchestrator | skipping: [testbed-node-0] 2025-11-08 14:21:58.379831 | orchestrator | skipping: [testbed-node-1] 2025-11-08 14:21:58.379841 | orchestrator | skipping: [testbed-node-2] 2025-11-08 14:21:58.379852 | orchestrator | 2025-11-08 14:21:58.379862 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 14:21:58.379874 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-08 14:21:58.379887 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-11-08 14:21:58.379898 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-11-08 14:21:58.379909 | orchestrator | 2025-11-08 14:21:58.379920 | orchestrator | 2025-11-08 14:21:58.379930 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 14:21:58.379941 | orchestrator | Saturday 08 November 2025 14:21:57 +0000 (0:00:00.457) 0:00:55.524 ***** 2025-11-08 14:21:58.379952 | orchestrator | =============================================================================== 2025-11-08 14:21:58.379989 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 48.69s 2025-11-08 14:21:58.380021 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.70s 2025-11-08 14:21:58.380032 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.73s 2025-11-08 14:21:58.380043 | 
orchestrator | mariadb : include_tasks ------------------------------------------------- 0.59s 2025-11-08 14:21:58.380053 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.46s 2025-11-08 14:21:58.380064 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.46s 2025-11-08 14:21:58.380086 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.36s 2025-11-08 14:21:58.380097 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.36s 2025-11-08 14:21:58.770524 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2025-11-08 14:21:58.777110 | orchestrator | + set -e 2025-11-08 14:21:58.777168 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-11-08 14:21:58.777182 | orchestrator | ++ export INTERACTIVE=false 2025-11-08 14:21:58.777195 | orchestrator | ++ INTERACTIVE=false 2025-11-08 14:21:58.777206 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-11-08 14:21:58.777218 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-11-08 14:21:58.777229 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-11-08 14:21:58.777891 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-11-08 14:21:58.781446 | orchestrator | 2025-11-08 14:21:58.781547 | orchestrator | # OpenStack endpoints 2025-11-08 14:21:58.781565 | orchestrator | 2025-11-08 14:21:58.781577 | orchestrator | ++ export MANAGER_VERSION=latest 2025-11-08 14:21:58.781589 | orchestrator | ++ MANAGER_VERSION=latest 2025-11-08 14:21:58.781601 | orchestrator | + export OS_CLOUD=admin 2025-11-08 14:21:58.781612 | orchestrator | + OS_CLOUD=admin 2025-11-08 14:21:58.781623 | orchestrator | + echo 2025-11-08 14:21:58.781634 | orchestrator | + echo '# OpenStack endpoints' 2025-11-08 14:21:58.781646 | orchestrator | + echo 2025-11-08 14:21:58.781657 | orchestrator | + openstack endpoint list 2025-11-08 14:22:02.787312 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-11-08 14:22:02.787441 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2025-11-08 14:22:02.787458 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-11-08 14:22:02.787469 | orchestrator | | 0d737ec57a7f427d81df70218426dd36 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2025-11-08 14:22:02.787480 | orchestrator | | 1f1b3c13394542e9a7e5e6f817766c1d | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2025-11-08 14:22:02.787491 | orchestrator | | 24c96975315548268478b439049db82e | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2025-11-08 14:22:02.787527 | orchestrator | | 288816353a1d429b90ca21b67c7d7d56 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2025-11-08 14:22:02.787539 | orchestrator | | 2bc6bae4f78c47cf93d05f89e4a466aa | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2025-11-08 14:22:02.787550 | orchestrator | | 2fd4c6c00fda4b01a0d6346ed6cc4d27 | RegionOne | 
barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2025-11-08 14:22:02.787561 | orchestrator | | 3fe747f7b6f2477ca0c0098676b3c694 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2025-11-08 14:22:02.787572 | orchestrator | | 4224e3b39aa341fd979984e28cd264fb | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2025-11-08 14:22:02.787582 | orchestrator | | 4deaa16804da41fbb5026d5bd2a2e9c4 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2025-11-08 14:22:02.787593 | orchestrator | | 4e6863a17d5643359478a3d34985192d | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-11-08 14:22:02.787605 | orchestrator | | 5f1dfe57dac44941bf9a46090b7be068 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2025-11-08 14:22:02.787640 | orchestrator | | 6e23676e26b64c3c94d455cb7e664d4c | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2025-11-08 14:22:02.787651 | orchestrator | | 7ad98121112e474a8176d58fd6a44b0a | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-11-08 14:22:02.787662 | orchestrator | | 7c8811f71db8448dafcaa40c5960fe38 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-11-08 14:22:02.787673 | orchestrator | | a035b66e7b1f4a3e828ec6c5b6e518c3 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2025-11-08 14:22:02.787684 | orchestrator | | a115ece85b6644858a0af368b8039d3b | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2025-11-08 14:22:02.787695 | orchestrator | | ade2cf2505c24721970dadf4360c8b07 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2025-11-08 14:22:02.787706 | orchestrator | | cd2d8cd636ad4d2493ac5e7ff65e107f | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2025-11-08 14:22:02.787717 | orchestrator | | d0f1c0d60b0748f59a3491f17bb97ed0 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2025-11-08 14:22:02.787728 | orchestrator | | df13ff1ee97147c084f8e8549251c798 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2025-11-08 14:22:02.787757 | orchestrator | | f3229e5412b34dd0873de0d3526e861b | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-11-08 14:22:02.787776 | orchestrator | | f33543c9d43d4fb184ee804ec0cd1e75 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2025-11-08 14:22:02.787787 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-11-08 14:22:03.168460 | orchestrator | 2025-11-08 14:22:03.168561 | orchestrator | # Cinder 2025-11-08 14:22:03.168571 | orchestrator | 2025-11-08 14:22:03.168579 | orchestrator | + echo 2025-11-08 14:22:03.168586 | orchestrator | + echo '# Cinder' 2025-11-08 14:22:03.168593 | orchestrator | + echo 2025-11-08 14:22:03.168601 | orchestrator | + openstack volume service list 2025-11-08 14:22:06.656212 | 
orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-11-08 14:22:06.656382 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2025-11-08 14:22:06.656411 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-11-08 14:22:06.656429 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-11-08T14:22:00.000000 | 2025-11-08 14:22:06.656447 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-11-08T14:22:00.000000 | 2025-11-08 14:22:06.656466 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2025-11-08T14:22:00.000000 | 2025-11-08 14:22:06.656486 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-11-08T14:22:00.000000 | 2025-11-08 14:22:06.656505 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-11-08T14:22:01.000000 | 2025-11-08 14:22:06.656524 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-11-08T14:22:01.000000 | 2025-11-08 14:22:06.656589 | orchestrator | | cinder-backup | testbed-node-3 | nova | enabled | up | 2025-11-08T14:21:59.000000 | 2025-11-08 14:22:06.656611 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-11-08T14:22:00.000000 | 2025-11-08 14:22:06.656630 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-11-08T14:22:01.000000 | 2025-11-08 14:22:06.656644 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-11-08 14:22:06.984347 | orchestrator | 2025-11-08 14:22:06.984442 | orchestrator | # Neutron 2025-11-08 14:22:06.984457 | orchestrator | 2025-11-08 14:22:06.984467 | orchestrator | + echo 2025-11-08 14:22:06.984479 | orchestrator | + echo '# Neutron' 2025-11-08 14:22:06.984491 | orchestrator | + echo 2025-11-08 14:22:06.984501 | orchestrator | + openstack network agent list 2025-11-08 14:22:09.954514 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-11-08 14:22:09.954635 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2025-11-08 14:22:09.954649 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-11-08 14:22:09.954662 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2025-11-08 14:22:09.954674 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2025-11-08 14:22:09.954685 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2025-11-08 14:22:09.954696 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2025-11-08 14:22:09.954707 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2025-11-08 14:22:09.954718 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2025-11-08 14:22:09.954729 | orchestrator | | 
4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2025-11-08 14:22:09.954740 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2025-11-08 14:22:09.954750 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2025-11-08 14:22:09.954761 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-11-08 14:22:10.526327 | orchestrator | + openstack network service provider list 2025-11-08 14:22:13.490875 | orchestrator | +---------------+------+---------+ 2025-11-08 14:22:13.491040 | orchestrator | | Service Type | Name | Default | 2025-11-08 14:22:13.491060 | orchestrator | +---------------+------+---------+ 2025-11-08 14:22:13.491074 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2025-11-08 14:22:13.491093 | orchestrator | +---------------+------+---------+ 2025-11-08 14:22:13.868931 | orchestrator | 2025-11-08 14:22:13.869065 | orchestrator | # Nova 2025-11-08 14:22:13.869083 | orchestrator | 2025-11-08 14:22:13.869095 | orchestrator | + echo 2025-11-08 14:22:13.869121 | orchestrator | + echo '# Nova' 2025-11-08 14:22:13.869134 | orchestrator | + echo 2025-11-08 14:22:13.869146 | orchestrator | + openstack compute service list 2025-11-08 14:22:17.762381 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-11-08 14:22:17.762483 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2025-11-08 14:22:17.762521 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-11-08 14:22:17.762534 | orchestrator | | 904400c8-dc36-4a62-9339-b1c6dd24fb81 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2025-11-08T14:22:16.000000 | 2025-11-08 14:22:17.762544 | orchestrator | | 50d65914-2eea-47c4-8f8e-eecfae5d0b12 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-11-08T14:22:14.000000 | 2025-11-08 14:22:17.762554 | orchestrator | | 3bdb68ab-41b0-4c76-b6b0-748212207c7a | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-11-08T14:22:16.000000 | 2025-11-08 14:22:17.762563 | orchestrator | | 52f3b178-21c4-42bb-b1ce-14f66bda97c2 | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-11-08T14:22:11.000000 | 2025-11-08 14:22:17.762573 | orchestrator | | 2eba2ecc-1d7a-4e4e-9683-432503e23bd5 | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-11-08T14:22:12.000000 | 2025-11-08 14:22:17.762582 | orchestrator | | 51367ee5-af93-46d0-9bc8-6421c92d052e | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-11-08T14:22:13.000000 | 2025-11-08 14:22:17.762592 | orchestrator | | 01327372-2297-4ec7-ba71-80484641751a | nova-compute | testbed-node-4 | nova | enabled | up | 2025-11-08T14:22:16.000000 | 2025-11-08 14:22:17.762602 | orchestrator | | 12c1e98e-5149-46dd-94b5-3efd591742fd | nova-compute | testbed-node-5 | nova | enabled | up | 2025-11-08T14:22:16.000000 | 2025-11-08 14:22:17.762611 | orchestrator | | 9d65b018-abcc-4306-b1a6-08e2f46889f4 | nova-compute | testbed-node-3 | nova | enabled | up | 2025-11-08T14:22:16.000000 | 2025-11-08 14:22:17.762621 | orchestrator | 
+--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-11-08 14:22:18.257913 | orchestrator | + openstack hypervisor list 2025-11-08 14:22:21.358080 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-11-08 14:22:21.358184 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2025-11-08 14:22:21.358196 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-11-08 14:22:21.358204 | orchestrator | | 5264363e-6b70-4deb-85ea-afae00529d72 | testbed-node-4 | QEMU | 192.168.16.14 | up | 2025-11-08 14:22:21.358212 | orchestrator | | 7fad4c6a-d5b1-4a2f-9693-0d6e28f92edf | testbed-node-5 | QEMU | 192.168.16.15 | up | 2025-11-08 14:22:21.358220 | orchestrator | | 53724522-1233-4978-bc30-62d32f243b42 | testbed-node-3 | QEMU | 192.168.16.13 | up | 2025-11-08 14:22:21.358227 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-11-08 14:22:21.981163 | orchestrator | 2025-11-08 14:22:21.981270 | orchestrator | # Run OpenStack test play 2025-11-08 14:22:21.981291 | orchestrator | 2025-11-08 14:22:21.981304 | orchestrator | + echo 2025-11-08 14:22:21.981316 | orchestrator | + echo '# Run OpenStack test play' 2025-11-08 14:22:21.981329 | orchestrator | + echo 2025-11-08 14:22:21.981340 | orchestrator | + osism apply --environment openstack test 2025-11-08 14:22:24.624374 | orchestrator | 2025-11-08 14:22:24 | INFO  | Trying to run play test in environment openstack 2025-11-08 14:22:34.752668 | orchestrator | 2025-11-08 14:22:34 | INFO  | Task 9a1d3552-1e07-416c-9bd4-64c1c1748cbc (test) was prepared for execution. 2025-11-08 14:22:34.752810 | orchestrator | 2025-11-08 14:22:34 | INFO  | It takes a moment until task 9a1d3552-1e07-416c-9bd4-64c1c1748cbc (test) has been started and output is visible here. 
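The test play shown below provisions a dedicated domain, project, users, security groups, a network topology and five volume-backed instances (test, test-1 ... test-4). Once it completes, the same resources can be inspected with the `test` cloud profile the script uses afterwards, which is exactly what the server_list step further down does:

    # Inspect the resources created by the test play (same commands as used below).
    openstack --os-cloud test server list
    openstack --os-cloud test server show test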
2025-11-08 14:29:57.073820 | orchestrator | 2025-11-08 14:29:57.073954 | orchestrator | PLAY [Create test project] ***************************************************** 2025-11-08 14:29:57.074049 | orchestrator | 2025-11-08 14:29:57.074062 | orchestrator | TASK [Create test domain] ****************************************************** 2025-11-08 14:29:57.074073 | orchestrator | Saturday 08 November 2025 14:22:39 +0000 (0:00:00.082) 0:00:00.082 ***** 2025-11-08 14:29:57.074114 | orchestrator | changed: [localhost] 2025-11-08 14:29:57.074130 | orchestrator | 2025-11-08 14:29:57.074141 | orchestrator | TASK [Create test-admin user] ************************************************** 2025-11-08 14:29:57.074150 | orchestrator | Saturday 08 November 2025 14:22:43 +0000 (0:00:04.168) 0:00:04.251 ***** 2025-11-08 14:29:57.074160 | orchestrator | changed: [localhost] 2025-11-08 14:29:57.074170 | orchestrator | 2025-11-08 14:29:57.074179 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2025-11-08 14:29:57.074190 | orchestrator | Saturday 08 November 2025 14:22:48 +0000 (0:00:04.299) 0:00:08.551 ***** 2025-11-08 14:29:57.074199 | orchestrator | changed: [localhost] 2025-11-08 14:29:57.074209 | orchestrator | 2025-11-08 14:29:57.074219 | orchestrator | TASK [Create test project] ***************************************************** 2025-11-08 14:29:57.074228 | orchestrator | Saturday 08 November 2025 14:22:55 +0000 (0:00:07.075) 0:00:15.626 ***** 2025-11-08 14:29:57.074238 | orchestrator | changed: [localhost] 2025-11-08 14:29:57.074247 | orchestrator | 2025-11-08 14:29:57.074257 | orchestrator | TASK [Create test user] ******************************************************** 2025-11-08 14:29:57.074266 | orchestrator | Saturday 08 November 2025 14:22:59 +0000 (0:00:04.551) 0:00:20.178 ***** 2025-11-08 14:29:57.074276 | orchestrator | changed: [localhost] 2025-11-08 14:29:57.074285 | orchestrator | 2025-11-08 14:29:57.074307 | orchestrator | TASK [Add member roles to user test] ******************************************* 2025-11-08 14:29:57.074318 | orchestrator | Saturday 08 November 2025 14:23:04 +0000 (0:00:04.637) 0:00:24.815 ***** 2025-11-08 14:29:57.074342 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2025-11-08 14:29:57.074363 | orchestrator | changed: [localhost] => (item=member) 2025-11-08 14:29:57.074376 | orchestrator | changed: [localhost] => (item=creator) 2025-11-08 14:29:57.074387 | orchestrator | 2025-11-08 14:29:57.074399 | orchestrator | TASK [Create test server group] ************************************************ 2025-11-08 14:29:57.074410 | orchestrator | Saturday 08 November 2025 14:23:18 +0000 (0:00:13.688) 0:00:38.503 ***** 2025-11-08 14:29:57.074420 | orchestrator | changed: [localhost] 2025-11-08 14:29:57.074431 | orchestrator | 2025-11-08 14:29:57.074442 | orchestrator | TASK [Create ssh security group] *********************************************** 2025-11-08 14:29:57.074453 | orchestrator | Saturday 08 November 2025 14:23:23 +0000 (0:00:05.063) 0:00:43.566 ***** 2025-11-08 14:29:57.074465 | orchestrator | changed: [localhost] 2025-11-08 14:29:57.074476 | orchestrator | 2025-11-08 14:29:57.074487 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2025-11-08 14:29:57.074497 | orchestrator | Saturday 08 November 2025 14:23:28 +0000 (0:00:05.212) 0:00:48.779 ***** 2025-11-08 14:29:57.074506 | orchestrator | changed: [localhost] 2025-11-08 14:29:57.074516 | 
orchestrator | 2025-11-08 14:29:57.074525 | orchestrator | TASK [Create icmp security group] ********************************************** 2025-11-08 14:29:57.074535 | orchestrator | Saturday 08 November 2025 14:23:32 +0000 (0:00:04.504) 0:00:53.283 ***** 2025-11-08 14:29:57.074544 | orchestrator | changed: [localhost] 2025-11-08 14:29:57.074554 | orchestrator | 2025-11-08 14:29:57.074563 | orchestrator | TASK [Add rule to icmp security group] ***************************************** 2025-11-08 14:29:57.074573 | orchestrator | Saturday 08 November 2025 14:23:37 +0000 (0:00:04.820) 0:00:58.104 ***** 2025-11-08 14:29:57.074582 | orchestrator | changed: [localhost] 2025-11-08 14:29:57.074592 | orchestrator | 2025-11-08 14:29:57.074601 | orchestrator | TASK [Create test keypair] ***************************************************** 2025-11-08 14:29:57.074611 | orchestrator | Saturday 08 November 2025 14:23:42 +0000 (0:00:04.609) 0:01:02.714 ***** 2025-11-08 14:29:57.074620 | orchestrator | changed: [localhost] 2025-11-08 14:29:57.074630 | orchestrator | 2025-11-08 14:29:57.074639 | orchestrator | TASK [Create test network topology] ******************************************** 2025-11-08 14:29:57.074648 | orchestrator | Saturday 08 November 2025 14:23:46 +0000 (0:00:04.405) 0:01:07.119 ***** 2025-11-08 14:29:57.074658 | orchestrator | changed: [localhost] 2025-11-08 14:29:57.074669 | orchestrator | 2025-11-08 14:29:57.074678 | orchestrator | TASK [Create test instances] *************************************************** 2025-11-08 14:29:57.074696 | orchestrator | Saturday 08 November 2025 14:24:03 +0000 (0:00:17.278) 0:01:24.398 ***** 2025-11-08 14:29:57.074705 | orchestrator | changed: [localhost] => (item=test) 2025-11-08 14:29:57.074715 | orchestrator | changed: [localhost] => (item=test-1) 2025-11-08 14:29:57.074725 | orchestrator | 2025-11-08 14:29:57.074734 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-11-08 14:29:57.074744 | orchestrator | 2025-11-08 14:29:57.074753 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-11-08 14:29:57.074763 | orchestrator | changed: [localhost] => (item=test-2) 2025-11-08 14:29:57.074772 | orchestrator | 2025-11-08 14:29:57.074782 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-11-08 14:29:57.074791 | orchestrator | 2025-11-08 14:29:57.074801 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-11-08 14:29:57.074810 | orchestrator | changed: [localhost] => (item=test-3) 2025-11-08 14:29:57.074820 | orchestrator | 2025-11-08 14:29:57.074829 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-11-08 14:29:57.074839 | orchestrator | changed: [localhost] => (item=test-4) 2025-11-08 14:29:57.074848 | orchestrator | 2025-11-08 14:29:57.074858 | orchestrator | TASK [Add metadata to instances] *********************************************** 2025-11-08 14:29:57.074871 | orchestrator | Saturday 08 November 2025 14:28:26 +0000 (0:04:22.709) 0:05:47.107 ***** 2025-11-08 14:29:57.074881 | orchestrator | changed: [localhost] => (item=test) 2025-11-08 14:29:57.074891 | orchestrator | changed: [localhost] => (item=test-1) 2025-11-08 14:29:57.074901 | orchestrator | changed: [localhost] => (item=test-2) 2025-11-08 14:29:57.074910 | orchestrator | changed: [localhost] => (item=test-3) 2025-11-08 14:29:57.074919 | 
orchestrator | changed: [localhost] => (item=test-4) 2025-11-08 14:29:57.074929 | orchestrator | 2025-11-08 14:29:57.074939 | orchestrator | TASK [Add tag to instances] **************************************************** 2025-11-08 14:29:57.074982 | orchestrator | Saturday 08 November 2025 14:28:52 +0000 (0:00:25.376) 0:06:12.484 ***** 2025-11-08 14:29:57.074994 | orchestrator | changed: [localhost] => (item=test) 2025-11-08 14:29:57.075003 | orchestrator | changed: [localhost] => (item=test-1) 2025-11-08 14:29:57.075013 | orchestrator | changed: [localhost] => (item=test-2) 2025-11-08 14:29:57.075022 | orchestrator | changed: [localhost] => (item=test-3) 2025-11-08 14:29:57.075032 | orchestrator | changed: [localhost] => (item=test-4) 2025-11-08 14:29:57.075041 | orchestrator | 2025-11-08 14:29:57.075051 | orchestrator | TASK [Create test volume] ****************************************************** 2025-11-08 14:29:57.075060 | orchestrator | Saturday 08 November 2025 14:29:30 +0000 (0:00:38.146) 0:06:50.630 ***** 2025-11-08 14:29:57.075070 | orchestrator | changed: [localhost] 2025-11-08 14:29:57.075079 | orchestrator | 2025-11-08 14:29:57.075089 | orchestrator | TASK [Attach test volume] ****************************************************** 2025-11-08 14:29:57.075098 | orchestrator | Saturday 08 November 2025 14:29:37 +0000 (0:00:06.899) 0:06:57.529 ***** 2025-11-08 14:29:57.075107 | orchestrator | changed: [localhost] 2025-11-08 14:29:57.075117 | orchestrator | 2025-11-08 14:29:57.075126 | orchestrator | TASK [Create floating ip address] ********************************************** 2025-11-08 14:29:57.075136 | orchestrator | Saturday 08 November 2025 14:29:51 +0000 (0:00:14.122) 0:07:11.652 ***** 2025-11-08 14:29:57.075146 | orchestrator | ok: [localhost] 2025-11-08 14:29:57.075156 | orchestrator | 2025-11-08 14:29:57.075166 | orchestrator | TASK [Print floating ip address] *********************************************** 2025-11-08 14:29:57.075176 | orchestrator | Saturday 08 November 2025 14:29:56 +0000 (0:00:05.435) 0:07:17.088 ***** 2025-11-08 14:29:57.075185 | orchestrator | ok: [localhost] => { 2025-11-08 14:29:57.075195 | orchestrator |  "msg": "192.168.112.176" 2025-11-08 14:29:57.075210 | orchestrator | } 2025-11-08 14:29:57.075227 | orchestrator | 2025-11-08 14:29:57.075244 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-08 14:29:57.075260 | orchestrator | localhost : ok=20  changed=18  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-08 14:29:57.075289 | orchestrator | 2025-11-08 14:29:57.075305 | orchestrator | 2025-11-08 14:29:57.075322 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-08 14:29:57.075338 | orchestrator | Saturday 08 November 2025 14:29:56 +0000 (0:00:00.039) 0:07:17.128 ***** 2025-11-08 14:29:57.075351 | orchestrator | =============================================================================== 2025-11-08 14:29:57.075361 | orchestrator | Create test instances ------------------------------------------------- 262.71s 2025-11-08 14:29:57.075371 | orchestrator | Add tag to instances --------------------------------------------------- 38.15s 2025-11-08 14:29:57.075380 | orchestrator | Add metadata to instances ---------------------------------------------- 25.38s 2025-11-08 14:29:57.075390 | orchestrator | Create test network topology ------------------------------------------- 17.28s 2025-11-08 14:29:57.075400 | orchestrator | 
Attach test volume ----------------------------------------------------- 14.12s 2025-11-08 14:29:57.075409 | orchestrator | Add member roles to user test ------------------------------------------ 13.69s 2025-11-08 14:29:57.075419 | orchestrator | Add manager role to user test-admin ------------------------------------- 7.08s 2025-11-08 14:29:57.075428 | orchestrator | Create test volume ------------------------------------------------------ 6.90s 2025-11-08 14:29:57.075437 | orchestrator | Create floating ip address ---------------------------------------------- 5.44s 2025-11-08 14:29:57.075447 | orchestrator | Create ssh security group ----------------------------------------------- 5.21s 2025-11-08 14:29:57.075456 | orchestrator | Create test server group ------------------------------------------------ 5.06s 2025-11-08 14:29:57.075465 | orchestrator | Create icmp security group ---------------------------------------------- 4.82s 2025-11-08 14:29:57.075475 | orchestrator | Create test user -------------------------------------------------------- 4.64s 2025-11-08 14:29:57.075484 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.61s 2025-11-08 14:29:57.075497 | orchestrator | Create test project ----------------------------------------------------- 4.55s 2025-11-08 14:29:57.075513 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.50s 2025-11-08 14:29:57.075527 | orchestrator | Create test keypair ----------------------------------------------------- 4.41s 2025-11-08 14:29:57.075542 | orchestrator | Create test-admin user -------------------------------------------------- 4.30s 2025-11-08 14:29:57.075559 | orchestrator | Create test domain ------------------------------------------------------ 4.17s 2025-11-08 14:29:57.075575 | orchestrator | Print floating ip address ----------------------------------------------- 0.04s 2025-11-08 14:29:57.457346 | orchestrator | + server_list 2025-11-08 14:29:57.457509 | orchestrator | + openstack --os-cloud test server list 2025-11-08 14:30:01.219718 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------------------+----------+ 2025-11-08 14:30:01.219838 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2025-11-08 14:30:01.219851 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------------------+----------+ 2025-11-08 14:30:01.219860 | orchestrator | | 16dc5413-b5f0-4a4a-8e28-5413dd19f227 | test-4 | ACTIVE | auto_allocated_network=10.42.0.59, 192.168.112.127 | N/A (booted from volume) | SCS-1L-1 | 2025-11-08 14:30:01.219869 | orchestrator | | 4dc46984-f8a3-4c18-82d8-3459bc63dc46 | test-3 | ACTIVE | auto_allocated_network=10.42.0.33, 192.168.112.123 | N/A (booted from volume) | SCS-1L-1 | 2025-11-08 14:30:01.219878 | orchestrator | | b363afce-f5c0-4b92-9270-47eb9bb2b0fb | test-2 | ACTIVE | auto_allocated_network=10.42.0.49, 192.168.112.103 | N/A (booted from volume) | SCS-1L-1 | 2025-11-08 14:30:01.219889 | orchestrator | | b678ff19-a77c-4553-8ef3-1c5a13cef8bf | test-1 | ACTIVE | auto_allocated_network=10.42.0.34, 192.168.112.151 | N/A (booted from volume) | SCS-1L-1 | 2025-11-08 14:30:01.219897 | orchestrator | | 0527ae6f-4c43-4e2c-8b16-61c45425b95d | test | ACTIVE | auto_allocated_network=10.42.0.25, 192.168.112.176 | N/A (booted from volume) | SCS-1L-1 | 2025-11-08 
14:30:01.219927 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------------------+----------+ 2025-11-08 14:30:01.621075 | orchestrator | + openstack --os-cloud test server show test 2025-11-08 14:30:05.257197 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-11-08 14:30:05.257327 | orchestrator | | Field | Value | 2025-11-08 14:30:05.257344 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-11-08 14:30:05.257357 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-11-08 14:30:05.257368 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-11-08 14:30:05.257379 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-11-08 14:30:05.257390 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2025-11-08 14:30:05.257402 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-11-08 14:30:05.257413 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-11-08 14:30:05.257460 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-11-08 14:30:05.257472 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-11-08 14:30:05.257488 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-11-08 14:30:05.257499 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-11-08 14:30:05.257510 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-11-08 14:30:05.257521 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-11-08 14:30:05.257532 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-11-08 14:30:05.257543 | orchestrator | | OS-EXT-STS:task_state | None | 2025-11-08 14:30:05.257554 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-11-08 14:30:05.257572 | orchestrator | | OS-SRV-USG:launched_at | 2025-11-08T14:24:48.000000 | 2025-11-08 14:30:05.257590 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-11-08 14:30:05.257602 | orchestrator | | accessIPv4 | | 2025-11-08 14:30:05.257618 | orchestrator | | accessIPv6 | | 2025-11-08 14:30:05.257629 | orchestrator | | addresses | auto_allocated_network=10.42.0.25, 192.168.112.176 | 2025-11-08 14:30:05.257640 | orchestrator | | config_drive | | 2025-11-08 14:30:05.257652 | orchestrator | | created | 2025-11-08T14:24:12Z | 2025-11-08 14:30:05.257663 | orchestrator | | description | None | 2025-11-08 14:30:05.257676 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-11-08 14:30:05.257694 
| orchestrator | | hostId | 1caf624b4537bd91c61432c410d07ff6da6b808a19592ac54b3c745a | 2025-11-08 14:30:05.257707 | orchestrator | | host_status | None | 2025-11-08 14:30:05.257727 | orchestrator | | id | 0527ae6f-4c43-4e2c-8b16-61c45425b95d | 2025-11-08 14:30:05.257741 | orchestrator | | image | N/A (booted from volume) | 2025-11-08 14:30:05.257758 | orchestrator | | key_name | test | 2025-11-08 14:30:05.257771 | orchestrator | | locked | False | 2025-11-08 14:30:05.257785 | orchestrator | | locked_reason | None | 2025-11-08 14:30:05.257797 | orchestrator | | name | test | 2025-11-08 14:30:05.257810 | orchestrator | | pinned_availability_zone | None | 2025-11-08 14:30:05.257823 | orchestrator | | progress | 0 | 2025-11-08 14:30:05.257843 | orchestrator | | project_id | 60456cc0f1094df492300dc109913463 | 2025-11-08 14:30:05.257856 | orchestrator | | properties | hostname='test' | 2025-11-08 14:30:05.257876 | orchestrator | | security_groups | name='ssh' | 2025-11-08 14:30:05.257891 | orchestrator | | | name='icmp' | 2025-11-08 14:30:05.257902 | orchestrator | | server_groups | None | 2025-11-08 14:30:05.257913 | orchestrator | | status | ACTIVE | 2025-11-08 14:30:05.257925 | orchestrator | | tags | test | 2025-11-08 14:30:05.257936 | orchestrator | | trusted_image_certificates | None | 2025-11-08 14:30:05.257954 | orchestrator | | updated | 2025-11-08T14:28:31Z | 2025-11-08 14:30:05.258014 | orchestrator | | user_id | 5616eb3ffc1e46278aa031673eb5288f | 2025-11-08 14:30:05.258084 | orchestrator | | volumes_attached | delete_on_termination='True', id='e09689ed-b930-4f32-863b-29a6ba2d4ef9' | 2025-11-08 14:30:05.258096 | orchestrator | | | delete_on_termination='False', id='90d68808-1f3d-41d1-b516-20e6046bff15' | 2025-11-08 14:30:05.264338 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-11-08 14:30:05.674210 | orchestrator | + openstack --os-cloud test server show test-1 2025-11-08 14:30:09.164017 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-11-08 14:30:09.164124 | orchestrator | | Field | Value | 2025-11-08 14:30:09.164139 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-11-08 14:30:09.164150 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-11-08 14:30:09.164160 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-11-08 14:30:09.164187 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-11-08 14:30:09.164197 | orchestrator | | 
OS-EXT-SRV-ATTR:hostname | test-1 | 2025-11-08 14:30:09.164207 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-11-08 14:30:09.164217 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-11-08 14:30:09.164246 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-11-08 14:30:09.164261 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-11-08 14:30:09.164272 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-11-08 14:30:09.164281 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-11-08 14:30:09.164291 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-11-08 14:30:09.164307 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-11-08 14:30:09.164318 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-11-08 14:30:09.164327 | orchestrator | | OS-EXT-STS:task_state | None | 2025-11-08 14:30:09.164337 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-11-08 14:30:09.164347 | orchestrator | | OS-SRV-USG:launched_at | 2025-11-08T14:25:47.000000 | 2025-11-08 14:30:09.164364 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-11-08 14:30:09.164378 | orchestrator | | accessIPv4 | | 2025-11-08 14:30:09.164388 | orchestrator | | accessIPv6 | | 2025-11-08 14:30:09.164398 | orchestrator | | addresses | auto_allocated_network=10.42.0.34, 192.168.112.151 | 2025-11-08 14:30:09.164414 | orchestrator | | config_drive | | 2025-11-08 14:30:09.164424 | orchestrator | | created | 2025-11-08T14:25:13Z | 2025-11-08 14:30:09.164435 | orchestrator | | description | None | 2025-11-08 14:30:09.164445 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-11-08 14:30:09.164455 | orchestrator | | hostId | 31f2b606f1433fc2d0c3287346fccdef025da50fd163c338668cf72d | 2025-11-08 14:30:09.164465 | orchestrator | | host_status | None | 2025-11-08 14:30:09.164482 | orchestrator | | id | b678ff19-a77c-4553-8ef3-1c5a13cef8bf | 2025-11-08 14:30:09.164496 | orchestrator | | image | N/A (booted from volume) | 2025-11-08 14:30:09.164510 | orchestrator | | key_name | test | 2025-11-08 14:30:09.164521 | orchestrator | | locked | False | 2025-11-08 14:30:09.164539 | orchestrator | | locked_reason | None | 2025-11-08 14:30:09.164551 | orchestrator | | name | test-1 | 2025-11-08 14:30:09.164563 | orchestrator | | pinned_availability_zone | None | 2025-11-08 14:30:09.164574 | orchestrator | | progress | 0 | 2025-11-08 14:30:09.164586 | orchestrator | | project_id | 60456cc0f1094df492300dc109913463 | 2025-11-08 14:30:09.164597 | orchestrator | | properties | hostname='test-1' | 2025-11-08 14:30:09.164616 | orchestrator | | security_groups | name='ssh' | 2025-11-08 14:30:09.164628 | orchestrator | | | name='icmp' | 2025-11-08 14:30:09.164639 | orchestrator | | server_groups | None | 2025-11-08 14:30:09.164665 | orchestrator | | status | ACTIVE | 2025-11-08 14:30:09.164677 | orchestrator | | tags | test | 2025-11-08 14:30:09.164689 | orchestrator | | trusted_image_certificates | None | 2025-11-08 14:30:09.164701 | orchestrator | | updated | 2025-11-08T14:28:37Z | 2025-11-08 14:30:09.164712 | orchestrator | | user_id | 5616eb3ffc1e46278aa031673eb5288f | 2025-11-08 14:30:09.164723 | orchestrator | | 
volumes_attached | delete_on_termination='True', id='952dc9d5-8fd5-4a59-a798-0c2218c75f24' | 2025-11-08 14:30:09.167803 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-11-08 14:30:09.531097 | orchestrator | + openstack --os-cloud test server show test-2 2025-11-08 14:30:13.126179 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-11-08 14:30:13.126270 | orchestrator | | Field | Value | 2025-11-08 14:30:13.126294 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-11-08 14:30:13.126300 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-11-08 14:30:13.126306 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-11-08 14:30:13.126311 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-11-08 14:30:13.126316 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2025-11-08 14:30:13.126321 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-11-08 14:30:13.126327 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-11-08 14:30:13.126344 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-11-08 14:30:13.126350 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-11-08 14:30:13.126363 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-11-08 14:30:13.126368 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-11-08 14:30:13.126374 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-11-08 14:30:13.126379 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-11-08 14:30:13.126384 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-11-08 14:30:13.126389 | orchestrator | | OS-EXT-STS:task_state | None | 2025-11-08 14:30:13.126395 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-11-08 14:30:13.126400 | orchestrator | | OS-SRV-USG:launched_at | 2025-11-08T14:26:43.000000 | 2025-11-08 14:30:13.126410 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-11-08 14:30:13.126423 | orchestrator | | accessIPv4 | | 2025-11-08 14:30:13.126428 | orchestrator | | accessIPv6 | | 2025-11-08 14:30:13.126434 | orchestrator | | addresses | auto_allocated_network=10.42.0.49, 192.168.112.103 | 2025-11-08 14:30:13.126439 | orchestrator | | config_drive | | 2025-11-08 14:30:13.126444 | orchestrator | | created | 2025-11-08T14:26:09Z | 2025-11-08 14:30:13.126449 | orchestrator | | description | None | 2025-11-08 14:30:13.126455 | orchestrator | | flavor | description=, disk='0', 
ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-11-08 14:30:13.126460 | orchestrator | | hostId | 972e915c44bcb4b8b0940c4bbc5ee0d77dc937f15372b5b4f3bc4b7d | 2025-11-08 14:30:13.126466 | orchestrator | | host_status | None | 2025-11-08 14:30:13.126475 | orchestrator | | id | b363afce-f5c0-4b92-9270-47eb9bb2b0fb | 2025-11-08 14:30:13.126493 | orchestrator | | image | N/A (booted from volume) | 2025-11-08 14:30:13.126499 | orchestrator | | key_name | test | 2025-11-08 14:30:13.126504 | orchestrator | | locked | False | 2025-11-08 14:30:13.126509 | orchestrator | | locked_reason | None | 2025-11-08 14:30:13.126514 | orchestrator | | name | test-2 | 2025-11-08 14:30:13.126520 | orchestrator | | pinned_availability_zone | None | 2025-11-08 14:30:13.126525 | orchestrator | | progress | 0 | 2025-11-08 14:30:13.126530 | orchestrator | | project_id | 60456cc0f1094df492300dc109913463 | 2025-11-08 14:30:13.126535 | orchestrator | | properties | hostname='test-2' | 2025-11-08 14:30:13.126549 | orchestrator | | security_groups | name='ssh' | 2025-11-08 14:30:13.126557 | orchestrator | | | name='icmp' | 2025-11-08 14:30:13.126563 | orchestrator | | server_groups | None | 2025-11-08 14:30:13.126568 | orchestrator | | status | ACTIVE | 2025-11-08 14:30:13.126573 | orchestrator | | tags | test | 2025-11-08 14:30:13.126578 | orchestrator | | trusted_image_certificates | None | 2025-11-08 14:30:13.126584 | orchestrator | | updated | 2025-11-08T14:28:42Z | 2025-11-08 14:30:13.126589 | orchestrator | | user_id | 5616eb3ffc1e46278aa031673eb5288f | 2025-11-08 14:30:13.126612 | orchestrator | | volumes_attached | delete_on_termination='True', id='215ce77a-4d28-46d9-9316-ac5bd5857b5b' | 2025-11-08 14:30:13.129327 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-11-08 14:30:13.517136 | orchestrator | + openstack --os-cloud test server show test-3 2025-11-08 14:30:17.015562 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-11-08 14:30:17.015643 | orchestrator | | Field | Value | 2025-11-08 14:30:17.015650 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-11-08 14:30:17.015654 | 
orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-11-08 14:30:17.015658 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-11-08 14:30:17.015662 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-11-08 14:30:17.015666 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2025-11-08 14:30:17.015670 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-11-08 14:30:17.015690 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-11-08 14:30:17.015705 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-11-08 14:30:17.015709 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-11-08 14:30:17.015713 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-11-08 14:30:17.015722 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-11-08 14:30:17.015726 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-11-08 14:30:17.015730 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-11-08 14:30:17.015734 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-11-08 14:30:17.015738 | orchestrator | | OS-EXT-STS:task_state | None | 2025-11-08 14:30:17.015742 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-11-08 14:30:17.015750 | orchestrator | | OS-SRV-USG:launched_at | 2025-11-08T14:27:29.000000 | 2025-11-08 14:30:17.015757 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-11-08 14:30:17.015761 | orchestrator | | accessIPv4 | | 2025-11-08 14:30:17.015767 | orchestrator | | accessIPv6 | | 2025-11-08 14:30:17.015771 | orchestrator | | addresses | auto_allocated_network=10.42.0.33, 192.168.112.123 | 2025-11-08 14:30:17.015775 | orchestrator | | config_drive | | 2025-11-08 14:30:17.015779 | orchestrator | | created | 2025-11-08T14:27:04Z | 2025-11-08 14:30:17.015783 | orchestrator | | description | None | 2025-11-08 14:30:17.015787 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-11-08 14:30:17.015795 | orchestrator | | hostId | 1caf624b4537bd91c61432c410d07ff6da6b808a19592ac54b3c745a | 2025-11-08 14:30:17.015799 | orchestrator | | host_status | None | 2025-11-08 14:30:17.015807 | orchestrator | | id | 4dc46984-f8a3-4c18-82d8-3459bc63dc46 | 2025-11-08 14:30:17.015811 | orchestrator | | image | N/A (booted from volume) | 2025-11-08 14:30:17.015817 | orchestrator | | key_name | test | 2025-11-08 14:30:17.015821 | orchestrator | | locked | False | 2025-11-08 14:30:17.015825 | orchestrator | | locked_reason | None | 2025-11-08 14:30:17.015829 | orchestrator | | name | test-3 | 2025-11-08 14:30:17.015832 | orchestrator | | pinned_availability_zone | None | 2025-11-08 14:30:17.015840 | orchestrator | | progress | 0 | 2025-11-08 14:30:17.015844 | orchestrator | | project_id | 60456cc0f1094df492300dc109913463 | 2025-11-08 14:30:17.015848 | orchestrator | | properties | hostname='test-3' | 2025-11-08 14:30:17.015855 | orchestrator | | security_groups | name='ssh' | 2025-11-08 14:30:17.015859 | orchestrator | | | name='icmp' | 2025-11-08 14:30:17.015865 | orchestrator | | server_groups | None | 2025-11-08 14:30:17.015869 | orchestrator | | status | ACTIVE | 2025-11-08 14:30:17.015873 | orchestrator | | tags | test | 2025-11-08 14:30:17.015877 | orchestrator | | 
trusted_image_certificates | None | 2025-11-08 14:30:17.015884 | orchestrator | | updated | 2025-11-08T14:28:46Z | 2025-11-08 14:30:17.015888 | orchestrator | | user_id | 5616eb3ffc1e46278aa031673eb5288f | 2025-11-08 14:30:17.015892 | orchestrator | | volumes_attached | delete_on_termination='True', id='421dc05d-6ee5-4e15-81ea-ba1be4a14568' | 2025-11-08 14:30:17.022780 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-11-08 14:30:17.610149 | orchestrator | + openstack --os-cloud test server show test-4 2025-11-08 14:30:21.023088 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-11-08 14:30:21.023212 | orchestrator | | Field | Value | 2025-11-08 14:30:21.023228 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-11-08 14:30:21.023238 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-11-08 14:30:21.023248 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-11-08 14:30:21.023278 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-11-08 14:30:21.023288 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2025-11-08 14:30:21.023297 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-11-08 14:30:21.023306 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-11-08 14:30:21.023330 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-11-08 14:30:21.023340 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-11-08 14:30:21.023355 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-11-08 14:30:21.023364 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-11-08 14:30:21.023373 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-11-08 14:30:21.023382 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-11-08 14:30:21.023400 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-11-08 14:30:21.023409 | orchestrator | | OS-EXT-STS:task_state | None | 2025-11-08 14:30:21.023418 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-11-08 14:30:21.023427 | orchestrator | | OS-SRV-USG:launched_at | 2025-11-08T14:28:14.000000 | 2025-11-08 14:30:21.023442 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-11-08 14:30:21.023452 | orchestrator | | accessIPv4 | | 2025-11-08 14:30:21.023465 | orchestrator | | accessIPv6 | | 2025-11-08 14:30:21.023474 | orchestrator | | addresses | auto_allocated_network=10.42.0.59, 192.168.112.127 | 2025-11-08 14:30:21.023484 | 
orchestrator | | config_drive | | 2025-11-08 14:30:21.023498 | orchestrator | | created | 2025-11-08T14:27:48Z | 2025-11-08 14:30:21.023507 | orchestrator | | description | None | 2025-11-08 14:30:21.023516 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-11-08 14:30:21.023525 | orchestrator | | hostId | 31f2b606f1433fc2d0c3287346fccdef025da50fd163c338668cf72d | 2025-11-08 14:30:21.023534 | orchestrator | | host_status | None | 2025-11-08 14:30:21.023550 | orchestrator | | id | 16dc5413-b5f0-4a4a-8e28-5413dd19f227 | 2025-11-08 14:30:21.023559 | orchestrator | | image | N/A (booted from volume) | 2025-11-08 14:30:21.023568 | orchestrator | | key_name | test | 2025-11-08 14:30:21.024062 | orchestrator | | locked | False | 2025-11-08 14:30:21.024124 | orchestrator | | locked_reason | None | 2025-11-08 14:30:21.024139 | orchestrator | | name | test-4 | 2025-11-08 14:30:21.024149 | orchestrator | | pinned_availability_zone | None | 2025-11-08 14:30:21.024158 | orchestrator | | progress | 0 | 2025-11-08 14:30:21.024167 | orchestrator | | project_id | 60456cc0f1094df492300dc109913463 | 2025-11-08 14:30:21.024180 | orchestrator | | properties | hostname='test-4' | 2025-11-08 14:30:21.024202 | orchestrator | | security_groups | name='ssh' | 2025-11-08 14:30:21.024211 | orchestrator | | | name='icmp' | 2025-11-08 14:30:21.024220 | orchestrator | | server_groups | None | 2025-11-08 14:30:21.024236 | orchestrator | | status | ACTIVE | 2025-11-08 14:30:21.024245 | orchestrator | | tags | test | 2025-11-08 14:30:21.024254 | orchestrator | | trusted_image_certificates | None | 2025-11-08 14:30:21.024263 | orchestrator | | updated | 2025-11-08T14:28:51Z | 2025-11-08 14:30:21.024272 | orchestrator | | user_id | 5616eb3ffc1e46278aa031673eb5288f | 2025-11-08 14:30:21.024281 | orchestrator | | volumes_attached | delete_on_termination='True', id='e5349162-32c9-4c42-9663-2819534a7647' | 2025-11-08 14:30:21.024294 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-11-08 14:30:21.396291 | orchestrator | + server_ping 2025-11-08 14:30:21.397732 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-11-08 14:30:21.397767 | orchestrator | ++ tr -d '\r' 2025-11-08 14:30:24.730640 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-08 14:30:24.730751 | orchestrator | + ping -c3 192.168.112.123 2025-11-08 14:30:24.746450 | orchestrator | PING 192.168.112.123 (192.168.112.123) 56(84) bytes of data. 
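The server show listings above mainly confirm that every instance is booted from volume, ACTIVE, and carries both a fixed address on auto_allocated_network and a floating 192.168.112.x address. If only the addresses are of interest, the same client call can be narrowed to that one field; the following is a hedged sketch, and the parsing assumes the addresses field keeps the "auto_allocated_network=<fixed>, <floating>" formatting shown in the tables above (the exact format can differ between client versions):

    # Hedged sketch: print just the floating address of each test server.
    # Assumes the addresses field is formatted as in the tables above,
    # i.e. "auto_allocated_network=<fixed>, <floating>".
    for server in test test-1 test-2 test-3 test-4; do
        addresses=$(openstack --os-cloud test server show "$server" -f value -c addresses)
        floating=${addresses##*, }    # keep everything after the last ", "
        echo "${server}: ${floating}"
    done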
2025-11-08 14:30:24.746558 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=1 ttl=63 time=8.59 ms 2025-11-08 14:30:25.742120 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=2 ttl=63 time=2.15 ms 2025-11-08 14:30:26.742744 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=3 ttl=63 time=1.73 ms 2025-11-08 14:30:26.742833 | orchestrator | 2025-11-08 14:30:26.742843 | orchestrator | --- 192.168.112.123 ping statistics --- 2025-11-08 14:30:26.742851 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-11-08 14:30:26.742878 | orchestrator | rtt min/avg/max/mdev = 1.726/4.154/8.590/3.141 ms 2025-11-08 14:30:26.743335 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-08 14:30:26.743350 | orchestrator | + ping -c3 192.168.112.151 2025-11-08 14:30:26.753421 | orchestrator | PING 192.168.112.151 (192.168.112.151) 56(84) bytes of data. 2025-11-08 14:30:26.753518 | orchestrator | 64 bytes from 192.168.112.151: icmp_seq=1 ttl=63 time=7.76 ms 2025-11-08 14:30:27.749852 | orchestrator | 64 bytes from 192.168.112.151: icmp_seq=2 ttl=63 time=2.44 ms 2025-11-08 14:30:28.751634 | orchestrator | 64 bytes from 192.168.112.151: icmp_seq=3 ttl=63 time=2.08 ms 2025-11-08 14:30:28.751853 | orchestrator | 2025-11-08 14:30:28.751872 | orchestrator | --- 192.168.112.151 ping statistics --- 2025-11-08 14:30:28.751887 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-11-08 14:30:28.751898 | orchestrator | rtt min/avg/max/mdev = 2.077/4.090/7.759/2.598 ms 2025-11-08 14:30:28.752726 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-08 14:30:28.752753 | orchestrator | + ping -c3 192.168.112.176 2025-11-08 14:30:28.763445 | orchestrator | PING 192.168.112.176 (192.168.112.176) 56(84) bytes of data. 2025-11-08 14:30:28.763549 | orchestrator | 64 bytes from 192.168.112.176: icmp_seq=1 ttl=63 time=7.99 ms 2025-11-08 14:30:29.759588 | orchestrator | 64 bytes from 192.168.112.176: icmp_seq=2 ttl=63 time=2.49 ms 2025-11-08 14:30:30.761263 | orchestrator | 64 bytes from 192.168.112.176: icmp_seq=3 ttl=63 time=2.08 ms 2025-11-08 14:30:30.761586 | orchestrator | 2025-11-08 14:30:30.761616 | orchestrator | --- 192.168.112.176 ping statistics --- 2025-11-08 14:30:30.761628 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-11-08 14:30:30.761638 | orchestrator | rtt min/avg/max/mdev = 2.078/4.183/7.985/2.693 ms 2025-11-08 14:30:30.763046 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-08 14:30:30.763083 | orchestrator | + ping -c3 192.168.112.127 2025-11-08 14:30:30.773692 | orchestrator | PING 192.168.112.127 (192.168.112.127) 56(84) bytes of data. 
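The xtrace lines above show what the server_ping helper expands to: it lists every ACTIVE floating IP of the test cloud, strips carriage returns with tr, and sends three ICMP echo requests to each address. Reconstructed from that trace, the helper presumably looks roughly like this (the real definition lives in the testbed scripts and may differ in details):

    # Reconstructed from the xtrace above; not the literal testbed source.
    server_ping() {
        # Every ACTIVE floating IP of the "test" cloud, one per line; tr removes
        # stray carriage returns so ping does not receive a malformed address.
        for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r'); do
            # Three echo requests per address; ping exits non-zero on total
            # packet loss, which would fail the step if the script runs with set -e.
            ping -c3 "$address"
        done
    }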
2025-11-08 14:30:30.773796 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=1 ttl=63 time=7.26 ms 2025-11-08 14:30:31.770713 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=2 ttl=63 time=2.55 ms 2025-11-08 14:30:32.772378 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=3 ttl=63 time=1.89 ms 2025-11-08 14:30:32.772512 | orchestrator | 2025-11-08 14:30:32.772538 | orchestrator | --- 192.168.112.127 ping statistics --- 2025-11-08 14:30:32.772560 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-11-08 14:30:32.772575 | orchestrator | rtt min/avg/max/mdev = 1.886/3.900/7.260/2.391 ms 2025-11-08 14:30:32.772588 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-08 14:30:32.772600 | orchestrator | + ping -c3 192.168.112.103 2025-11-08 14:30:32.784378 | orchestrator | PING 192.168.112.103 (192.168.112.103) 56(84) bytes of data. 2025-11-08 14:30:32.784490 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=1 ttl=63 time=6.12 ms 2025-11-08 14:30:33.783232 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=2 ttl=63 time=2.91 ms 2025-11-08 14:30:34.784120 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=3 ttl=63 time=2.36 ms 2025-11-08 14:30:34.784230 | orchestrator | 2025-11-08 14:30:34.784256 | orchestrator | --- 192.168.112.103 ping statistics --- 2025-11-08 14:30:34.784287 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-11-08 14:30:34.784309 | orchestrator | rtt min/avg/max/mdev = 2.359/3.798/6.123/1.659 ms 2025-11-08 14:30:34.784751 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-11-08 14:30:34.784780 | orchestrator | + compute_list 2025-11-08 14:30:34.784792 | orchestrator | + osism manage compute list testbed-node-3 2025-11-08 14:30:38.360144 | orchestrator | +--------------------------------------+--------+----------+ 2025-11-08 14:30:38.360225 | orchestrator | | ID | Name | Status | 2025-11-08 14:30:38.360232 | orchestrator | |--------------------------------------+--------+----------| 2025-11-08 14:30:38.360236 | orchestrator | | b363afce-f5c0-4b92-9270-47eb9bb2b0fb | test-2 | ACTIVE | 2025-11-08 14:30:38.360241 | orchestrator | +--------------------------------------+--------+----------+ 2025-11-08 14:30:38.841660 | orchestrator | + osism manage compute list testbed-node-4 2025-11-08 14:30:42.970160 | orchestrator | +--------------------------------------+--------+----------+ 2025-11-08 14:30:42.970266 | orchestrator | | ID | Name | Status | 2025-11-08 14:30:42.970288 | orchestrator | |--------------------------------------+--------+----------| 2025-11-08 14:30:42.970308 | orchestrator | | 4dc46984-f8a3-4c18-82d8-3459bc63dc46 | test-3 | ACTIVE | 2025-11-08 14:30:42.970327 | orchestrator | | 0527ae6f-4c43-4e2c-8b16-61c45425b95d | test | ACTIVE | 2025-11-08 14:30:42.970344 | orchestrator | +--------------------------------------+--------+----------+ 2025-11-08 14:30:43.424574 | orchestrator | + osism manage compute list testbed-node-5 2025-11-08 14:30:47.082236 | orchestrator | +--------------------------------------+--------+----------+ 2025-11-08 14:30:47.082400 | orchestrator | | ID | Name | Status | 2025-11-08 14:30:47.082419 | orchestrator | |--------------------------------------+--------+----------| 2025-11-08 14:30:47.082453 | orchestrator | | 16dc5413-b5f0-4a4a-8e28-5413dd19f227 | test-4 | ACTIVE | 2025-11-08 14:30:47.082465 | orchestrator | | 
b678ff19-a77c-4553-8ef3-1c5a13cef8bf | test-1 | ACTIVE | 2025-11-08 14:30:47.082476 | orchestrator | +--------------------------------------+--------+----------+ 2025-11-08 14:30:47.393866 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4 2025-11-08 14:30:50.990570 | orchestrator | 2025-11-08 14:30:50 | INFO  | Live migrating server 4dc46984-f8a3-4c18-82d8-3459bc63dc46 2025-11-08 14:31:04.875630 | orchestrator | 2025-11-08 14:31:04 | INFO  | Live migration of 4dc46984-f8a3-4c18-82d8-3459bc63dc46 (test-3) is still in progress 2025-11-08 14:31:07.243147 | orchestrator | 2025-11-08 14:31:07 | INFO  | Live migration of 4dc46984-f8a3-4c18-82d8-3459bc63dc46 (test-3) is still in progress 2025-11-08 14:31:09.697667 | orchestrator | 2025-11-08 14:31:09 | INFO  | Live migration of 4dc46984-f8a3-4c18-82d8-3459bc63dc46 (test-3) is still in progress 2025-11-08 14:31:12.260097 | orchestrator | 2025-11-08 14:31:12 | INFO  | Live migration of 4dc46984-f8a3-4c18-82d8-3459bc63dc46 (test-3) is still in progress 2025-11-08 14:31:14.530736 | orchestrator | 2025-11-08 14:31:14 | INFO  | Live migration of 4dc46984-f8a3-4c18-82d8-3459bc63dc46 (test-3) is still in progress 2025-11-08 14:31:17.196413 | orchestrator | 2025-11-08 14:31:17 | INFO  | Live migration of 4dc46984-f8a3-4c18-82d8-3459bc63dc46 (test-3) is still in progress 2025-11-08 14:31:19.457461 | orchestrator | 2025-11-08 14:31:19 | INFO  | Live migration of 4dc46984-f8a3-4c18-82d8-3459bc63dc46 (test-3) is still in progress 2025-11-08 14:31:21.761999 | orchestrator | 2025-11-08 14:31:21 | INFO  | Live migration of 4dc46984-f8a3-4c18-82d8-3459bc63dc46 (test-3) is still in progress 2025-11-08 14:31:24.059104 | orchestrator | 2025-11-08 14:31:24 | INFO  | Live migration of 4dc46984-f8a3-4c18-82d8-3459bc63dc46 (test-3) is still in progress 2025-11-08 14:31:26.406815 | orchestrator | 2025-11-08 14:31:26 | INFO  | Live migration of 4dc46984-f8a3-4c18-82d8-3459bc63dc46 (test-3) completed with status ACTIVE 2025-11-08 14:31:26.406917 | orchestrator | 2025-11-08 14:31:26 | INFO  | Live migrating server 0527ae6f-4c43-4e2c-8b16-61c45425b95d 2025-11-08 14:31:38.672417 | orchestrator | 2025-11-08 14:31:38 | INFO  | Live migration of 0527ae6f-4c43-4e2c-8b16-61c45425b95d (test) is still in progress 2025-11-08 14:31:41.016175 | orchestrator | 2025-11-08 14:31:41 | INFO  | Live migration of 0527ae6f-4c43-4e2c-8b16-61c45425b95d (test) is still in progress 2025-11-08 14:31:43.378309 | orchestrator | 2025-11-08 14:31:43 | INFO  | Live migration of 0527ae6f-4c43-4e2c-8b16-61c45425b95d (test) is still in progress 2025-11-08 14:31:45.673767 | orchestrator | 2025-11-08 14:31:45 | INFO  | Live migration of 0527ae6f-4c43-4e2c-8b16-61c45425b95d (test) is still in progress 2025-11-08 14:31:48.001484 | orchestrator | 2025-11-08 14:31:47 | INFO  | Live migration of 0527ae6f-4c43-4e2c-8b16-61c45425b95d (test) is still in progress 2025-11-08 14:31:50.420719 | orchestrator | 2025-11-08 14:31:50 | INFO  | Live migration of 0527ae6f-4c43-4e2c-8b16-61c45425b95d (test) is still in progress 2025-11-08 14:31:52.825089 | orchestrator | 2025-11-08 14:31:52 | INFO  | Live migration of 0527ae6f-4c43-4e2c-8b16-61c45425b95d (test) is still in progress 2025-11-08 14:31:55.089540 | orchestrator | 2025-11-08 14:31:55 | INFO  | Live migration of 0527ae6f-4c43-4e2c-8b16-61c45425b95d (test) is still in progress 2025-11-08 14:31:57.391829 | orchestrator | 2025-11-08 14:31:57 | INFO  | Live migration of 0527ae6f-4c43-4e2c-8b16-61c45425b95d (test) is still in 
progress 2025-11-08 14:31:59.699829 | orchestrator | 2025-11-08 14:31:59 | INFO  | Live migration of 0527ae6f-4c43-4e2c-8b16-61c45425b95d (test) is still in progress 2025-11-08 14:32:02.037431 | orchestrator | 2025-11-08 14:32:02 | INFO  | Live migration of 0527ae6f-4c43-4e2c-8b16-61c45425b95d (test) completed with status ACTIVE 2025-11-08 14:32:02.556750 | orchestrator | + compute_list 2025-11-08 14:32:02.556854 | orchestrator | + osism manage compute list testbed-node-3 2025-11-08 14:32:06.397289 | orchestrator | +--------------------------------------+--------+----------+ 2025-11-08 14:32:06.397434 | orchestrator | | ID | Name | Status | 2025-11-08 14:32:06.397453 | orchestrator | |--------------------------------------+--------+----------| 2025-11-08 14:32:06.397465 | orchestrator | | 4dc46984-f8a3-4c18-82d8-3459bc63dc46 | test-3 | ACTIVE | 2025-11-08 14:32:06.397526 | orchestrator | | b363afce-f5c0-4b92-9270-47eb9bb2b0fb | test-2 | ACTIVE | 2025-11-08 14:32:06.397540 | orchestrator | | 0527ae6f-4c43-4e2c-8b16-61c45425b95d | test | ACTIVE | 2025-11-08 14:32:06.397552 | orchestrator | +--------------------------------------+--------+----------+ 2025-11-08 14:32:06.860276 | orchestrator | + osism manage compute list testbed-node-4 2025-11-08 14:32:10.024126 | orchestrator | +------+--------+----------+ 2025-11-08 14:32:10.024361 | orchestrator | | ID | Name | Status | 2025-11-08 14:32:10.024408 | orchestrator | |------+--------+----------| 2025-11-08 14:32:10.024421 | orchestrator | +------+--------+----------+ 2025-11-08 14:32:10.574402 | orchestrator | + osism manage compute list testbed-node-5 2025-11-08 14:32:14.038821 | orchestrator | +--------------------------------------+--------+----------+ 2025-11-08 14:32:14.039009 | orchestrator | | ID | Name | Status | 2025-11-08 14:32:14.039024 | orchestrator | |--------------------------------------+--------+----------| 2025-11-08 14:32:14.039034 | orchestrator | | 16dc5413-b5f0-4a4a-8e28-5413dd19f227 | test-4 | ACTIVE | 2025-11-08 14:32:14.039044 | orchestrator | | b678ff19-a77c-4553-8ef3-1c5a13cef8bf | test-1 | ACTIVE | 2025-11-08 14:32:14.039053 | orchestrator | +--------------------------------------+--------+----------+ 2025-11-08 14:32:14.557071 | orchestrator | + server_ping 2025-11-08 14:32:14.557199 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-11-08 14:32:14.559123 | orchestrator | ++ tr -d '\r' 2025-11-08 14:32:17.796212 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-08 14:32:17.796353 | orchestrator | + ping -c3 192.168.112.123 2025-11-08 14:32:17.805799 | orchestrator | PING 192.168.112.123 (192.168.112.123) 56(84) bytes of data. 
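compute_list, invoked before and after each migration, simply prints the instances hosted on each of the three compute nodes; the empty table for testbed-node-4 above is the visible proof that the node was fully evacuated. Judging by the trace, the helper amounts to the following (the node names may come from an inventory in the real helper rather than being hard-coded):

    # Reconstructed from the xtrace above.
    compute_list() {
        for node in testbed-node-3 testbed-node-4 testbed-node-5; do
            # Prints ID, Name and Status of every instance currently on the node.
            osism manage compute list "$node"
        done
    }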
2025-11-08 14:32:17.805894 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=1 ttl=63 time=6.89 ms 2025-11-08 14:32:18.803535 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=2 ttl=63 time=2.65 ms 2025-11-08 14:32:19.805458 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=3 ttl=63 time=2.61 ms 2025-11-08 14:32:19.805581 | orchestrator | 2025-11-08 14:32:19.805594 | orchestrator | --- 192.168.112.123 ping statistics --- 2025-11-08 14:32:19.805606 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-11-08 14:32:19.805615 | orchestrator | rtt min/avg/max/mdev = 2.613/4.051/6.892/2.008 ms 2025-11-08 14:32:19.806387 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-08 14:32:19.806409 | orchestrator | + ping -c3 192.168.112.151 2025-11-08 14:32:19.821647 | orchestrator | PING 192.168.112.151 (192.168.112.151) 56(84) bytes of data. 2025-11-08 14:32:19.821846 | orchestrator | 64 bytes from 192.168.112.151: icmp_seq=1 ttl=63 time=9.57 ms 2025-11-08 14:32:20.816422 | orchestrator | 64 bytes from 192.168.112.151: icmp_seq=2 ttl=63 time=2.49 ms 2025-11-08 14:32:21.818174 | orchestrator | 64 bytes from 192.168.112.151: icmp_seq=3 ttl=63 time=2.39 ms 2025-11-08 14:32:21.818282 | orchestrator | 2025-11-08 14:32:21.818298 | orchestrator | --- 192.168.112.151 ping statistics --- 2025-11-08 14:32:21.818310 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-11-08 14:32:21.818322 | orchestrator | rtt min/avg/max/mdev = 2.391/4.816/9.571/3.362 ms 2025-11-08 14:32:21.818878 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-08 14:32:21.818904 | orchestrator | + ping -c3 192.168.112.176 2025-11-08 14:32:21.835255 | orchestrator | PING 192.168.112.176 (192.168.112.176) 56(84) bytes of data. 2025-11-08 14:32:21.835384 | orchestrator | 64 bytes from 192.168.112.176: icmp_seq=1 ttl=63 time=10.8 ms 2025-11-08 14:32:22.828540 | orchestrator | 64 bytes from 192.168.112.176: icmp_seq=2 ttl=63 time=2.73 ms 2025-11-08 14:32:23.829177 | orchestrator | 64 bytes from 192.168.112.176: icmp_seq=3 ttl=63 time=2.48 ms 2025-11-08 14:32:23.829336 | orchestrator | 2025-11-08 14:32:23.829350 | orchestrator | --- 192.168.112.176 ping statistics --- 2025-11-08 14:32:23.829358 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-11-08 14:32:23.829365 | orchestrator | rtt min/avg/max/mdev = 2.483/5.348/10.831/3.878 ms 2025-11-08 14:32:23.829483 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-08 14:32:23.829495 | orchestrator | + ping -c3 192.168.112.127 2025-11-08 14:32:23.839998 | orchestrator | PING 192.168.112.127 (192.168.112.127) 56(84) bytes of data. 
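osism manage compute migrate logs a "still in progress" line roughly every two seconds until the server reports ACTIVE on the target node. Conceptually that is a status poll; an equivalent check with the plain OpenStack client could look like the sketch below. This illustrates the polling idea only, it is not the osism implementation, and it assumes admin credentials (OS-EXT-SRV-ATTR:host is admin-only, which is presumably why it shows as None in the unprivileged server show output above):

    # Illustrative only: wait until SERVER is ACTIVE on TARGET after a live migration.
    # Assumes an "admin" cloud entry with privileges to read OS-EXT-SRV-ATTR:host.
    wait_for_migration() {
        local server="$1" target="$2"
        while true; do
            status=$(openstack --os-cloud admin server show "$server" -f value -c status)
            host=$(openstack --os-cloud admin server show "$server" -f value -c OS-EXT-SRV-ATTR:host)
            if [ "$status" = "ACTIVE" ] && [ "$host" = "$target" ]; then
                echo "live migration of $server completed"
                return 0
            fi
            echo "live migration of $server still in progress (status=$status, host=$host)"
            sleep 2
        done
    }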
2025-11-08 14:32:23.840097 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=1 ttl=63 time=6.50 ms 2025-11-08 14:32:24.837599 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=2 ttl=63 time=2.28 ms 2025-11-08 14:32:25.840042 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=3 ttl=63 time=2.23 ms 2025-11-08 14:32:25.840144 | orchestrator | 2025-11-08 14:32:25.840159 | orchestrator | --- 192.168.112.127 ping statistics --- 2025-11-08 14:32:25.840172 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-11-08 14:32:25.840185 | orchestrator | rtt min/avg/max/mdev = 2.226/3.669/6.501/2.002 ms 2025-11-08 14:32:25.840197 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-08 14:32:25.840210 | orchestrator | + ping -c3 192.168.112.103 2025-11-08 14:32:25.853064 | orchestrator | PING 192.168.112.103 (192.168.112.103) 56(84) bytes of data. 2025-11-08 14:32:25.853177 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=1 ttl=63 time=6.29 ms 2025-11-08 14:32:26.851233 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=2 ttl=63 time=2.68 ms 2025-11-08 14:32:27.853195 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=3 ttl=63 time=2.13 ms 2025-11-08 14:32:27.853298 | orchestrator | 2025-11-08 14:32:27.853314 | orchestrator | --- 192.168.112.103 ping statistics --- 2025-11-08 14:32:27.853327 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-11-08 14:32:27.853338 | orchestrator | rtt min/avg/max/mdev = 2.133/3.699/6.286/1.842 ms 2025-11-08 14:32:27.853350 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5 2025-11-08 14:32:31.542325 | orchestrator | 2025-11-08 14:32:31 | INFO  | Live migrating server 16dc5413-b5f0-4a4a-8e28-5413dd19f227 2025-11-08 14:32:42.552903 | orchestrator | 2025-11-08 14:32:42 | INFO  | Live migration of 16dc5413-b5f0-4a4a-8e28-5413dd19f227 (test-4) is still in progress 2025-11-08 14:32:44.901568 | orchestrator | 2025-11-08 14:32:44 | INFO  | Live migration of 16dc5413-b5f0-4a4a-8e28-5413dd19f227 (test-4) is still in progress 2025-11-08 14:32:47.275671 | orchestrator | 2025-11-08 14:32:47 | INFO  | Live migration of 16dc5413-b5f0-4a4a-8e28-5413dd19f227 (test-4) is still in progress 2025-11-08 14:32:49.551866 | orchestrator | 2025-11-08 14:32:49 | INFO  | Live migration of 16dc5413-b5f0-4a4a-8e28-5413dd19f227 (test-4) is still in progress 2025-11-08 14:32:51.816903 | orchestrator | 2025-11-08 14:32:51 | INFO  | Live migration of 16dc5413-b5f0-4a4a-8e28-5413dd19f227 (test-4) is still in progress 2025-11-08 14:32:54.123778 | orchestrator | 2025-11-08 14:32:54 | INFO  | Live migration of 16dc5413-b5f0-4a4a-8e28-5413dd19f227 (test-4) is still in progress 2025-11-08 14:32:56.401503 | orchestrator | 2025-11-08 14:32:56 | INFO  | Live migration of 16dc5413-b5f0-4a4a-8e28-5413dd19f227 (test-4) is still in progress 2025-11-08 14:32:58.654611 | orchestrator | 2025-11-08 14:32:58 | INFO  | Live migration of 16dc5413-b5f0-4a4a-8e28-5413dd19f227 (test-4) is still in progress 2025-11-08 14:33:00.981842 | orchestrator | 2025-11-08 14:33:00 | INFO  | Live migration of 16dc5413-b5f0-4a4a-8e28-5413dd19f227 (test-4) is still in progress 2025-11-08 14:33:03.270372 | orchestrator | 2025-11-08 14:33:03 | INFO  | Live migration of 16dc5413-b5f0-4a4a-8e28-5413dd19f227 (test-4) completed with status ACTIVE 2025-11-08 14:33:03.271840 | orchestrator | 2025-11-08 14:33:03 
| INFO  | Live migrating server b678ff19-a77c-4553-8ef3-1c5a13cef8bf 2025-11-08 14:33:14.586661 | orchestrator | 2025-11-08 14:33:14 | INFO  | Live migration of b678ff19-a77c-4553-8ef3-1c5a13cef8bf (test-1) is still in progress 2025-11-08 14:33:16.904860 | orchestrator | 2025-11-08 14:33:16 | INFO  | Live migration of b678ff19-a77c-4553-8ef3-1c5a13cef8bf (test-1) is still in progress 2025-11-08 14:33:19.310739 | orchestrator | 2025-11-08 14:33:19 | INFO  | Live migration of b678ff19-a77c-4553-8ef3-1c5a13cef8bf (test-1) is still in progress 2025-11-08 14:33:21.606120 | orchestrator | 2025-11-08 14:33:21 | INFO  | Live migration of b678ff19-a77c-4553-8ef3-1c5a13cef8bf (test-1) is still in progress 2025-11-08 14:33:23.861319 | orchestrator | 2025-11-08 14:33:23 | INFO  | Live migration of b678ff19-a77c-4553-8ef3-1c5a13cef8bf (test-1) is still in progress 2025-11-08 14:33:26.128861 | orchestrator | 2025-11-08 14:33:26 | INFO  | Live migration of b678ff19-a77c-4553-8ef3-1c5a13cef8bf (test-1) is still in progress 2025-11-08 14:33:28.399832 | orchestrator | 2025-11-08 14:33:28 | INFO  | Live migration of b678ff19-a77c-4553-8ef3-1c5a13cef8bf (test-1) is still in progress 2025-11-08 14:33:30.648241 | orchestrator | 2025-11-08 14:33:30 | INFO  | Live migration of b678ff19-a77c-4553-8ef3-1c5a13cef8bf (test-1) is still in progress 2025-11-08 14:33:32.952846 | orchestrator | 2025-11-08 14:33:32 | INFO  | Live migration of b678ff19-a77c-4553-8ef3-1c5a13cef8bf (test-1) is still in progress 2025-11-08 14:33:35.323418 | orchestrator | 2025-11-08 14:33:35 | INFO  | Live migration of b678ff19-a77c-4553-8ef3-1c5a13cef8bf (test-1) completed with status ACTIVE 2025-11-08 14:33:35.846219 | orchestrator | + compute_list 2025-11-08 14:33:35.846339 | orchestrator | + osism manage compute list testbed-node-3 2025-11-08 14:33:39.473402 | orchestrator | +--------------------------------------+--------+----------+ 2025-11-08 14:33:39.473512 | orchestrator | | ID | Name | Status | 2025-11-08 14:33:39.473527 | orchestrator | |--------------------------------------+--------+----------| 2025-11-08 14:33:39.473538 | orchestrator | | 16dc5413-b5f0-4a4a-8e28-5413dd19f227 | test-4 | ACTIVE | 2025-11-08 14:33:39.473549 | orchestrator | | 4dc46984-f8a3-4c18-82d8-3459bc63dc46 | test-3 | ACTIVE | 2025-11-08 14:33:39.473560 | orchestrator | | b363afce-f5c0-4b92-9270-47eb9bb2b0fb | test-2 | ACTIVE | 2025-11-08 14:33:39.473571 | orchestrator | | b678ff19-a77c-4553-8ef3-1c5a13cef8bf | test-1 | ACTIVE | 2025-11-08 14:33:39.473582 | orchestrator | | 0527ae6f-4c43-4e2c-8b16-61c45425b95d | test | ACTIVE | 2025-11-08 14:33:39.473593 | orchestrator | +--------------------------------------+--------+----------+ 2025-11-08 14:33:39.956068 | orchestrator | + osism manage compute list testbed-node-4 2025-11-08 14:33:43.104072 | orchestrator | +------+--------+----------+ 2025-11-08 14:33:43.104195 | orchestrator | | ID | Name | Status | 2025-11-08 14:33:43.104212 | orchestrator | |------+--------+----------| 2025-11-08 14:33:43.104223 | orchestrator | +------+--------+----------+ 2025-11-08 14:33:43.598284 | orchestrator | + osism manage compute list testbed-node-5 2025-11-08 14:33:46.814859 | orchestrator | +------+--------+----------+ 2025-11-08 14:33:46.815019 | orchestrator | | ID | Name | Status | 2025-11-08 14:33:46.815035 | orchestrator | |------+--------+----------| 2025-11-08 14:33:46.815047 | orchestrator | +------+--------+----------+ 2025-11-08 14:33:47.342281 | orchestrator | + server_ping 2025-11-08 14:33:47.344261 | orchestrator 
| ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-11-08 14:33:47.344304 | orchestrator | ++ tr -d '\r' 2025-11-08 14:33:50.730605 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-08 14:33:50.730734 | orchestrator | + ping -c3 192.168.112.123 2025-11-08 14:33:50.738858 | orchestrator | PING 192.168.112.123 (192.168.112.123) 56(84) bytes of data. 2025-11-08 14:33:50.738996 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=1 ttl=63 time=6.24 ms 2025-11-08 14:33:51.736939 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=2 ttl=63 time=2.64 ms 2025-11-08 14:33:52.739171 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=3 ttl=63 time=2.34 ms 2025-11-08 14:33:52.739294 | orchestrator | 2025-11-08 14:33:52.739311 | orchestrator | --- 192.168.112.123 ping statistics --- 2025-11-08 14:33:52.739324 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-11-08 14:33:52.739335 | orchestrator | rtt min/avg/max/mdev = 2.344/3.739/6.235/1.768 ms 2025-11-08 14:33:52.739364 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-08 14:33:52.739377 | orchestrator | + ping -c3 192.168.112.151 2025-11-08 14:33:52.749799 | orchestrator | PING 192.168.112.151 (192.168.112.151) 56(84) bytes of data. 2025-11-08 14:33:52.749847 | orchestrator | 64 bytes from 192.168.112.151: icmp_seq=1 ttl=63 time=5.93 ms 2025-11-08 14:33:53.748080 | orchestrator | 64 bytes from 192.168.112.151: icmp_seq=2 ttl=63 time=2.36 ms 2025-11-08 14:33:54.749070 | orchestrator | 64 bytes from 192.168.112.151: icmp_seq=3 ttl=63 time=2.21 ms 2025-11-08 14:33:54.749177 | orchestrator | 2025-11-08 14:33:54.749193 | orchestrator | --- 192.168.112.151 ping statistics --- 2025-11-08 14:33:54.749207 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-11-08 14:33:54.749219 | orchestrator | rtt min/avg/max/mdev = 2.209/3.499/5.926/1.717 ms 2025-11-08 14:33:54.749506 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-08 14:33:54.749536 | orchestrator | + ping -c3 192.168.112.176 2025-11-08 14:33:54.764354 | orchestrator | PING 192.168.112.176 (192.168.112.176) 56(84) bytes of data. 2025-11-08 14:33:54.764458 | orchestrator | 64 bytes from 192.168.112.176: icmp_seq=1 ttl=63 time=9.88 ms 2025-11-08 14:33:55.758431 | orchestrator | 64 bytes from 192.168.112.176: icmp_seq=2 ttl=63 time=2.44 ms 2025-11-08 14:33:56.760037 | orchestrator | 64 bytes from 192.168.112.176: icmp_seq=3 ttl=63 time=2.10 ms 2025-11-08 14:33:56.760152 | orchestrator | 2025-11-08 14:33:56.760169 | orchestrator | --- 192.168.112.176 ping statistics --- 2025-11-08 14:33:56.760182 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-11-08 14:33:56.760194 | orchestrator | rtt min/avg/max/mdev = 2.103/4.806/9.878/3.588 ms 2025-11-08 14:33:56.761614 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-08 14:33:56.761678 | orchestrator | + ping -c3 192.168.112.127 2025-11-08 14:33:56.774386 | orchestrator | PING 192.168.112.127 (192.168.112.127) 56(84) bytes of data. 
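After each evacuation the job only prints the compute_list tables; a scripted check could count the instance rows instead. The sketch below builds on the same command used above, but the row-matching pattern is an assumption about the table format that osism manage compute list prints:

    # Sketch: succeed only if the node hosts no instances. The grep pattern
    # assumes the table rows start with "| <uuid> " as in the output above.
    node_is_empty() {
        local node="$1" rows
        rows=$(osism manage compute list "$node" | grep -cE '^\| [0-9a-f-]{36} ' || true)
        [ "$rows" -eq 0 ]
    }

    node_is_empty testbed-node-4 && echo "testbed-node-4 is evacuated"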
2025-11-08 14:33:56.774460 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=1 ttl=63 time=7.63 ms 2025-11-08 14:33:57.772057 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=2 ttl=63 time=2.87 ms 2025-11-08 14:33:58.772875 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=3 ttl=63 time=2.53 ms 2025-11-08 14:33:58.773040 | orchestrator | 2025-11-08 14:33:58.773057 | orchestrator | --- 192.168.112.127 ping statistics --- 2025-11-08 14:33:58.773071 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-11-08 14:33:58.773083 | orchestrator | rtt min/avg/max/mdev = 2.530/4.342/7.629/2.327 ms 2025-11-08 14:33:58.773741 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-08 14:33:58.773792 | orchestrator | + ping -c3 192.168.112.103 2025-11-08 14:33:58.787296 | orchestrator | PING 192.168.112.103 (192.168.112.103) 56(84) bytes of data. 2025-11-08 14:33:58.787383 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=1 ttl=63 time=7.44 ms 2025-11-08 14:33:59.784453 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=2 ttl=63 time=2.58 ms 2025-11-08 14:34:00.785818 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=3 ttl=63 time=2.04 ms 2025-11-08 14:34:00.785961 | orchestrator | 2025-11-08 14:34:00.785980 | orchestrator | --- 192.168.112.103 ping statistics --- 2025-11-08 14:34:00.785993 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-11-08 14:34:00.786004 | orchestrator | rtt min/avg/max/mdev = 2.042/4.020/7.437/2.425 ms 2025-11-08 14:34:00.786566 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3 2025-11-08 14:34:04.407344 | orchestrator | 2025-11-08 14:34:04 | INFO  | Live migrating server 16dc5413-b5f0-4a4a-8e28-5413dd19f227 2025-11-08 14:34:16.043430 | orchestrator | 2025-11-08 14:34:16 | INFO  | Live migration of 16dc5413-b5f0-4a4a-8e28-5413dd19f227 (test-4) is still in progress 2025-11-08 14:34:18.425813 | orchestrator | 2025-11-08 14:34:18 | INFO  | Live migration of 16dc5413-b5f0-4a4a-8e28-5413dd19f227 (test-4) is still in progress 2025-11-08 14:34:20.739564 | orchestrator | 2025-11-08 14:34:20 | INFO  | Live migration of 16dc5413-b5f0-4a4a-8e28-5413dd19f227 (test-4) is still in progress 2025-11-08 14:34:23.113294 | orchestrator | 2025-11-08 14:34:23 | INFO  | Live migration of 16dc5413-b5f0-4a4a-8e28-5413dd19f227 (test-4) is still in progress 2025-11-08 14:34:25.624948 | orchestrator | 2025-11-08 14:34:25 | INFO  | Live migration of 16dc5413-b5f0-4a4a-8e28-5413dd19f227 (test-4) is still in progress 2025-11-08 14:34:27.900531 | orchestrator | 2025-11-08 14:34:27 | INFO  | Live migration of 16dc5413-b5f0-4a4a-8e28-5413dd19f227 (test-4) is still in progress 2025-11-08 14:34:30.303415 | orchestrator | 2025-11-08 14:34:30 | INFO  | Live migration of 16dc5413-b5f0-4a4a-8e28-5413dd19f227 (test-4) is still in progress 2025-11-08 14:34:32.727537 | orchestrator | 2025-11-08 14:34:32 | INFO  | Live migration of 16dc5413-b5f0-4a4a-8e28-5413dd19f227 (test-4) is still in progress 2025-11-08 14:34:35.222232 | orchestrator | 2025-11-08 14:34:35 | INFO  | Live migration of 16dc5413-b5f0-4a4a-8e28-5413dd19f227 (test-4) completed with status ACTIVE 2025-11-08 14:34:35.222326 | orchestrator | 2025-11-08 14:34:35 | INFO  | Live migrating server 4dc46984-f8a3-4c18-82d8-3459bc63dc46 2025-11-08 14:34:46.015530 | orchestrator | 2025-11-08 14:34:46 | INFO  | Live migration 
of 4dc46984-f8a3-4c18-82d8-3459bc63dc46 (test-3) is still in progress 2025-11-08 14:34:48.341471 | orchestrator | 2025-11-08 14:34:48 | INFO  | Live migration of 4dc46984-f8a3-4c18-82d8-3459bc63dc46 (test-3) is still in progress 2025-11-08 14:34:50.708453 | orchestrator | 2025-11-08 14:34:50 | INFO  | Live migration of 4dc46984-f8a3-4c18-82d8-3459bc63dc46 (test-3) is still in progress 2025-11-08 14:34:53.053416 | orchestrator | 2025-11-08 14:34:53 | INFO  | Live migration of 4dc46984-f8a3-4c18-82d8-3459bc63dc46 (test-3) is still in progress 2025-11-08 14:34:55.326574 | orchestrator | 2025-11-08 14:34:55 | INFO  | Live migration of 4dc46984-f8a3-4c18-82d8-3459bc63dc46 (test-3) is still in progress 2025-11-08 14:34:57.626777 | orchestrator | 2025-11-08 14:34:57 | INFO  | Live migration of 4dc46984-f8a3-4c18-82d8-3459bc63dc46 (test-3) is still in progress 2025-11-08 14:34:59.924205 | orchestrator | 2025-11-08 14:34:59 | INFO  | Live migration of 4dc46984-f8a3-4c18-82d8-3459bc63dc46 (test-3) is still in progress 2025-11-08 14:35:02.206124 | orchestrator | 2025-11-08 14:35:02 | INFO  | Live migration of 4dc46984-f8a3-4c18-82d8-3459bc63dc46 (test-3) is still in progress 2025-11-08 14:35:04.528077 | orchestrator | 2025-11-08 14:35:04 | INFO  | Live migration of 4dc46984-f8a3-4c18-82d8-3459bc63dc46 (test-3) is still in progress 2025-11-08 14:35:06.826500 | orchestrator | 2025-11-08 14:35:06 | INFO  | Live migration of 4dc46984-f8a3-4c18-82d8-3459bc63dc46 (test-3) completed with status ACTIVE 2025-11-08 14:35:06.826623 | orchestrator | 2025-11-08 14:35:06 | INFO  | Live migrating server b363afce-f5c0-4b92-9270-47eb9bb2b0fb 2025-11-08 14:35:17.299303 | orchestrator | 2025-11-08 14:35:17 | INFO  | Live migration of b363afce-f5c0-4b92-9270-47eb9bb2b0fb (test-2) is still in progress 2025-11-08 14:35:19.676397 | orchestrator | 2025-11-08 14:35:19 | INFO  | Live migration of b363afce-f5c0-4b92-9270-47eb9bb2b0fb (test-2) is still in progress 2025-11-08 14:35:22.197595 | orchestrator | 2025-11-08 14:35:22 | INFO  | Live migration of b363afce-f5c0-4b92-9270-47eb9bb2b0fb (test-2) is still in progress 2025-11-08 14:35:24.555960 | orchestrator | 2025-11-08 14:35:24 | INFO  | Live migration of b363afce-f5c0-4b92-9270-47eb9bb2b0fb (test-2) is still in progress 2025-11-08 14:35:26.933261 | orchestrator | 2025-11-08 14:35:26 | INFO  | Live migration of b363afce-f5c0-4b92-9270-47eb9bb2b0fb (test-2) is still in progress 2025-11-08 14:35:29.234261 | orchestrator | 2025-11-08 14:35:29 | INFO  | Live migration of b363afce-f5c0-4b92-9270-47eb9bb2b0fb (test-2) is still in progress 2025-11-08 14:35:31.617442 | orchestrator | 2025-11-08 14:35:31 | INFO  | Live migration of b363afce-f5c0-4b92-9270-47eb9bb2b0fb (test-2) is still in progress 2025-11-08 14:35:33.974179 | orchestrator | 2025-11-08 14:35:33 | INFO  | Live migration of b363afce-f5c0-4b92-9270-47eb9bb2b0fb (test-2) is still in progress 2025-11-08 14:35:36.282360 | orchestrator | 2025-11-08 14:35:36 | INFO  | Live migration of b363afce-f5c0-4b92-9270-47eb9bb2b0fb (test-2) completed with status ACTIVE 2025-11-08 14:35:36.282486 | orchestrator | 2025-11-08 14:35:36 | INFO  | Live migrating server b678ff19-a77c-4553-8ef3-1c5a13cef8bf 2025-11-08 14:35:48.645798 | orchestrator | 2025-11-08 14:35:48 | INFO  | Live migration of b678ff19-a77c-4553-8ef3-1c5a13cef8bf (test-1) is still in progress 2025-11-08 14:35:50.994091 | orchestrator | 2025-11-08 14:35:50 | INFO  | Live migration of b678ff19-a77c-4553-8ef3-1c5a13cef8bf (test-1) is still in progress 2025-11-08 
14:35:53.385755 | orchestrator | 2025-11-08 14:35:53 | INFO  | Live migration of b678ff19-a77c-4553-8ef3-1c5a13cef8bf (test-1) is still in progress 2025-11-08 14:35:55.630273 | orchestrator | 2025-11-08 14:35:55 | INFO  | Live migration of b678ff19-a77c-4553-8ef3-1c5a13cef8bf (test-1) is still in progress 2025-11-08 14:35:57.931974 | orchestrator | 2025-11-08 14:35:57 | INFO  | Live migration of b678ff19-a77c-4553-8ef3-1c5a13cef8bf (test-1) is still in progress 2025-11-08 14:36:00.252775 | orchestrator | 2025-11-08 14:36:00 | INFO  | Live migration of b678ff19-a77c-4553-8ef3-1c5a13cef8bf (test-1) is still in progress 2025-11-08 14:36:02.590130 | orchestrator | 2025-11-08 14:36:02 | INFO  | Live migration of b678ff19-a77c-4553-8ef3-1c5a13cef8bf (test-1) is still in progress 2025-11-08 14:36:04.872821 | orchestrator | 2025-11-08 14:36:04 | INFO  | Live migration of b678ff19-a77c-4553-8ef3-1c5a13cef8bf (test-1) is still in progress 2025-11-08 14:36:07.245204 | orchestrator | 2025-11-08 14:36:07 | INFO  | Live migration of b678ff19-a77c-4553-8ef3-1c5a13cef8bf (test-1) completed with status ACTIVE 2025-11-08 14:36:07.245298 | orchestrator | 2025-11-08 14:36:07 | INFO  | Live migrating server 0527ae6f-4c43-4e2c-8b16-61c45425b95d 2025-11-08 14:36:17.021410 | orchestrator | 2025-11-08 14:36:17 | INFO  | Live migration of 0527ae6f-4c43-4e2c-8b16-61c45425b95d (test) is still in progress 2025-11-08 14:36:19.370628 | orchestrator | 2025-11-08 14:36:19 | INFO  | Live migration of 0527ae6f-4c43-4e2c-8b16-61c45425b95d (test) is still in progress 2025-11-08 14:36:21.700252 | orchestrator | 2025-11-08 14:36:21 | INFO  | Live migration of 0527ae6f-4c43-4e2c-8b16-61c45425b95d (test) is still in progress 2025-11-08 14:36:24.083038 | orchestrator | 2025-11-08 14:36:24 | INFO  | Live migration of 0527ae6f-4c43-4e2c-8b16-61c45425b95d (test) is still in progress 2025-11-08 14:36:26.355089 | orchestrator | 2025-11-08 14:36:26 | INFO  | Live migration of 0527ae6f-4c43-4e2c-8b16-61c45425b95d (test) is still in progress 2025-11-08 14:36:28.666809 | orchestrator | 2025-11-08 14:36:28 | INFO  | Live migration of 0527ae6f-4c43-4e2c-8b16-61c45425b95d (test) is still in progress 2025-11-08 14:36:31.043750 | orchestrator | 2025-11-08 14:36:31 | INFO  | Live migration of 0527ae6f-4c43-4e2c-8b16-61c45425b95d (test) is still in progress 2025-11-08 14:36:33.312452 | orchestrator | 2025-11-08 14:36:33 | INFO  | Live migration of 0527ae6f-4c43-4e2c-8b16-61c45425b95d (test) is still in progress 2025-11-08 14:36:35.582563 | orchestrator | 2025-11-08 14:36:35 | INFO  | Live migration of 0527ae6f-4c43-4e2c-8b16-61c45425b95d (test) is still in progress 2025-11-08 14:36:37.877240 | orchestrator | 2025-11-08 14:36:37 | INFO  | Live migration of 0527ae6f-4c43-4e2c-8b16-61c45425b95d (test) is still in progress 2025-11-08 14:36:40.336830 | orchestrator | 2025-11-08 14:36:40 | INFO  | Live migration of 0527ae6f-4c43-4e2c-8b16-61c45425b95d (test) completed with status ACTIVE 2025-11-08 14:36:40.931128 | orchestrator | + compute_list 2025-11-08 14:36:40.931250 | orchestrator | + osism manage compute list testbed-node-3 2025-11-08 14:36:44.166796 | orchestrator | +------+--------+----------+ 2025-11-08 14:36:44.166987 | orchestrator | | ID | Name | Status | 2025-11-08 14:36:44.167986 | orchestrator | |------+--------+----------| 2025-11-08 14:36:44.168072 | orchestrator | +------+--------+----------+ 2025-11-08 14:36:44.669818 | orchestrator | + osism manage compute list testbed-node-4 2025-11-08 14:36:48.515582 | orchestrator | 
+--------------------------------------+--------+----------+ 2025-11-08 14:36:48.515710 | orchestrator | | ID | Name | Status | 2025-11-08 14:36:48.515726 | orchestrator | |--------------------------------------+--------+----------| 2025-11-08 14:36:48.515738 | orchestrator | | 16dc5413-b5f0-4a4a-8e28-5413dd19f227 | test-4 | ACTIVE | 2025-11-08 14:36:48.515750 | orchestrator | | 4dc46984-f8a3-4c18-82d8-3459bc63dc46 | test-3 | ACTIVE | 2025-11-08 14:36:48.515761 | orchestrator | | b363afce-f5c0-4b92-9270-47eb9bb2b0fb | test-2 | ACTIVE | 2025-11-08 14:36:48.515772 | orchestrator | | b678ff19-a77c-4553-8ef3-1c5a13cef8bf | test-1 | ACTIVE | 2025-11-08 14:36:48.515783 | orchestrator | | 0527ae6f-4c43-4e2c-8b16-61c45425b95d | test | ACTIVE | 2025-11-08 14:36:48.515796 | orchestrator | +--------------------------------------+--------+----------+ 2025-11-08 14:36:49.010154 | orchestrator | + osism manage compute list testbed-node-5 2025-11-08 14:36:52.425777 | orchestrator | +------+--------+----------+ 2025-11-08 14:36:52.425871 | orchestrator | | ID | Name | Status | 2025-11-08 14:36:52.425923 | orchestrator | |------+--------+----------| 2025-11-08 14:36:52.425933 | orchestrator | +------+--------+----------+ 2025-11-08 14:36:52.954345 | orchestrator | + server_ping 2025-11-08 14:36:52.955460 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-11-08 14:36:52.955545 | orchestrator | ++ tr -d '\r' 2025-11-08 14:36:56.386080 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-08 14:36:56.386187 | orchestrator | + ping -c3 192.168.112.123 2025-11-08 14:36:56.399345 | orchestrator | PING 192.168.112.123 (192.168.112.123) 56(84) bytes of data. 2025-11-08 14:36:56.399437 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=1 ttl=63 time=10.6 ms 2025-11-08 14:36:57.392537 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=2 ttl=63 time=2.78 ms 2025-11-08 14:36:58.392847 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=3 ttl=63 time=2.41 ms 2025-11-08 14:36:58.393011 | orchestrator | 2025-11-08 14:36:58.393034 | orchestrator | --- 192.168.112.123 ping statistics --- 2025-11-08 14:36:58.393056 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2001ms 2025-11-08 14:36:58.393112 | orchestrator | rtt min/avg/max/mdev = 2.412/5.269/10.612/3.781 ms 2025-11-08 14:36:58.393690 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-08 14:36:58.393715 | orchestrator | + ping -c3 192.168.112.151 2025-11-08 14:36:58.407383 | orchestrator | PING 192.168.112.151 (192.168.112.151) 56(84) bytes of data. 
2025-11-08 14:36:58.407454 | orchestrator | 64 bytes from 192.168.112.151: icmp_seq=1 ttl=63 time=7.87 ms 2025-11-08 14:36:59.404147 | orchestrator | 64 bytes from 192.168.112.151: icmp_seq=2 ttl=63 time=2.82 ms 2025-11-08 14:37:00.404061 | orchestrator | 64 bytes from 192.168.112.151: icmp_seq=3 ttl=63 time=1.50 ms 2025-11-08 14:37:00.404159 | orchestrator | 2025-11-08 14:37:00.404175 | orchestrator | --- 192.168.112.151 ping statistics --- 2025-11-08 14:37:00.404186 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-11-08 14:37:00.404197 | orchestrator | rtt min/avg/max/mdev = 1.502/4.064/7.869/2.743 ms 2025-11-08 14:37:00.404208 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-08 14:37:00.404218 | orchestrator | + ping -c3 192.168.112.176 2025-11-08 14:37:00.416800 | orchestrator | PING 192.168.112.176 (192.168.112.176) 56(84) bytes of data. 2025-11-08 14:37:00.416921 | orchestrator | 64 bytes from 192.168.112.176: icmp_seq=1 ttl=63 time=8.69 ms 2025-11-08 14:37:01.413239 | orchestrator | 64 bytes from 192.168.112.176: icmp_seq=2 ttl=63 time=2.60 ms 2025-11-08 14:37:02.414312 | orchestrator | 64 bytes from 192.168.112.176: icmp_seq=3 ttl=63 time=1.91 ms 2025-11-08 14:37:02.414441 | orchestrator | 2025-11-08 14:37:02.414474 | orchestrator | --- 192.168.112.176 ping statistics --- 2025-11-08 14:37:02.414487 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-11-08 14:37:02.414497 | orchestrator | rtt min/avg/max/mdev = 1.913/4.402/8.691/3.045 ms 2025-11-08 14:37:02.415599 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-08 14:37:02.415645 | orchestrator | + ping -c3 192.168.112.127 2025-11-08 14:37:02.426942 | orchestrator | PING 192.168.112.127 (192.168.112.127) 56(84) bytes of data. 2025-11-08 14:37:02.427008 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=1 ttl=63 time=6.48 ms 2025-11-08 14:37:03.424961 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=2 ttl=63 time=2.39 ms 2025-11-08 14:37:04.427190 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=3 ttl=63 time=2.16 ms 2025-11-08 14:37:04.427323 | orchestrator | 2025-11-08 14:37:04.427344 | orchestrator | --- 192.168.112.127 ping statistics --- 2025-11-08 14:37:04.427357 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-11-08 14:37:04.427369 | orchestrator | rtt min/avg/max/mdev = 2.158/3.675/6.475/1.981 ms 2025-11-08 14:37:04.427820 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-08 14:37:04.427943 | orchestrator | + ping -c3 192.168.112.103 2025-11-08 14:37:04.439233 | orchestrator | PING 192.168.112.103 (192.168.112.103) 56(84) bytes of data. 
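Taken together, this section exercises a full round trip: drain testbed-node-4 and testbed-node-5 onto testbed-node-3, move everything to testbed-node-4, and finally (in the step that follows) to testbed-node-5, re-running compute_list and server_ping after each move to confirm placement and reachability. Condensed into one driver, and assuming the helpers sketched earlier, that flow is roughly:

    # Condensed sketch of the migration round trip exercised in this job;
    # the job also runs the same checks once before the first migration.
    migrate_and_verify() {
        local target="$1" source="$2"
        osism manage compute migrate --yes --target "$target" "$source"
        compute_list    # source node should now be empty
        server_ping     # every floating IP should still answer
    }

    migrate_and_verify testbed-node-3 testbed-node-4
    migrate_and_verify testbed-node-3 testbed-node-5
    migrate_and_verify testbed-node-4 testbed-node-3
    migrate_and_verify testbed-node-5 testbed-node-4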
2025-11-08 14:37:04.439311 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=1 ttl=63 time=7.70 ms 2025-11-08 14:37:05.434950 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=2 ttl=63 time=2.14 ms 2025-11-08 14:37:06.437037 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=3 ttl=63 time=1.93 ms 2025-11-08 14:37:06.437133 | orchestrator | 2025-11-08 14:37:06.437144 | orchestrator | --- 192.168.112.103 ping statistics --- 2025-11-08 14:37:06.437154 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-11-08 14:37:06.437161 | orchestrator | rtt min/avg/max/mdev = 1.929/3.922/7.702/2.673 ms 2025-11-08 14:37:06.437459 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4 2025-11-08 14:37:09.967513 | orchestrator | 2025-11-08 14:37:09 | INFO  | Live migrating server 16dc5413-b5f0-4a4a-8e28-5413dd19f227 2025-11-08 14:37:19.794315 | orchestrator | 2025-11-08 14:37:19 | INFO  | Live migration of 16dc5413-b5f0-4a4a-8e28-5413dd19f227 (test-4) is still in progress 2025-11-08 14:37:22.131654 | orchestrator | 2025-11-08 14:37:22 | INFO  | Live migration of 16dc5413-b5f0-4a4a-8e28-5413dd19f227 (test-4) is still in progress 2025-11-08 14:37:24.476016 | orchestrator | 2025-11-08 14:37:24 | INFO  | Live migration of 16dc5413-b5f0-4a4a-8e28-5413dd19f227 (test-4) is still in progress 2025-11-08 14:37:26.835533 | orchestrator | 2025-11-08 14:37:26 | INFO  | Live migration of 16dc5413-b5f0-4a4a-8e28-5413dd19f227 (test-4) is still in progress 2025-11-08 14:37:29.142284 | orchestrator | 2025-11-08 14:37:29 | INFO  | Live migration of 16dc5413-b5f0-4a4a-8e28-5413dd19f227 (test-4) is still in progress 2025-11-08 14:37:31.423122 | orchestrator | 2025-11-08 14:37:31 | INFO  | Live migration of 16dc5413-b5f0-4a4a-8e28-5413dd19f227 (test-4) is still in progress 2025-11-08 14:37:33.720584 | orchestrator | 2025-11-08 14:37:33 | INFO  | Live migration of 16dc5413-b5f0-4a4a-8e28-5413dd19f227 (test-4) is still in progress 2025-11-08 14:37:36.071116 | orchestrator | 2025-11-08 14:37:36 | INFO  | Live migration of 16dc5413-b5f0-4a4a-8e28-5413dd19f227 (test-4) is still in progress 2025-11-08 14:37:38.380389 | orchestrator | 2025-11-08 14:37:38 | INFO  | Live migration of 16dc5413-b5f0-4a4a-8e28-5413dd19f227 (test-4) completed with status ACTIVE 2025-11-08 14:37:38.380542 | orchestrator | 2025-11-08 14:37:38 | INFO  | Live migrating server 4dc46984-f8a3-4c18-82d8-3459bc63dc46 2025-11-08 14:37:48.894600 | orchestrator | 2025-11-08 14:37:48 | INFO  | Live migration of 4dc46984-f8a3-4c18-82d8-3459bc63dc46 (test-3) is still in progress 2025-11-08 14:37:51.236137 | orchestrator | 2025-11-08 14:37:51 | INFO  | Live migration of 4dc46984-f8a3-4c18-82d8-3459bc63dc46 (test-3) is still in progress 2025-11-08 14:37:53.632060 | orchestrator | 2025-11-08 14:37:53 | INFO  | Live migration of 4dc46984-f8a3-4c18-82d8-3459bc63dc46 (test-3) is still in progress 2025-11-08 14:37:55.879837 | orchestrator | 2025-11-08 14:37:55 | INFO  | Live migration of 4dc46984-f8a3-4c18-82d8-3459bc63dc46 (test-3) is still in progress 2025-11-08 14:37:58.179137 | orchestrator | 2025-11-08 14:37:58 | INFO  | Live migration of 4dc46984-f8a3-4c18-82d8-3459bc63dc46 (test-3) is still in progress 2025-11-08 14:38:00.469769 | orchestrator | 2025-11-08 14:38:00 | INFO  | Live migration of 4dc46984-f8a3-4c18-82d8-3459bc63dc46 (test-3) is still in progress 2025-11-08 14:38:02.729084 | orchestrator | 2025-11-08 14:38:02 | INFO  | Live migration of 4dc46984-f8a3-4c18-82d8-3459bc63dc46 
(test-3) is still in progress 2025-11-08 14:38:05.010498 | orchestrator | 2025-11-08 14:38:05 | INFO  | Live migration of 4dc46984-f8a3-4c18-82d8-3459bc63dc46 (test-3) is still in progress 2025-11-08 14:38:07.379712 | orchestrator | 2025-11-08 14:38:07 | INFO  | Live migration of 4dc46984-f8a3-4c18-82d8-3459bc63dc46 (test-3) completed with status ACTIVE 2025-11-08 14:38:07.379828 | orchestrator | 2025-11-08 14:38:07 | INFO  | Live migrating server b363afce-f5c0-4b92-9270-47eb9bb2b0fb 2025-11-08 14:38:17.144516 | orchestrator | 2025-11-08 14:38:17 | INFO  | Live migration of b363afce-f5c0-4b92-9270-47eb9bb2b0fb (test-2) is still in progress 2025-11-08 14:38:19.486353 | orchestrator | 2025-11-08 14:38:19 | INFO  | Live migration of b363afce-f5c0-4b92-9270-47eb9bb2b0fb (test-2) is still in progress 2025-11-08 14:38:21.863154 | orchestrator | 2025-11-08 14:38:21 | INFO  | Live migration of b363afce-f5c0-4b92-9270-47eb9bb2b0fb (test-2) is still in progress 2025-11-08 14:38:24.198920 | orchestrator | 2025-11-08 14:38:24 | INFO  | Live migration of b363afce-f5c0-4b92-9270-47eb9bb2b0fb (test-2) is still in progress 2025-11-08 14:38:26.484170 | orchestrator | 2025-11-08 14:38:26 | INFO  | Live migration of b363afce-f5c0-4b92-9270-47eb9bb2b0fb (test-2) is still in progress 2025-11-08 14:38:28.746980 | orchestrator | 2025-11-08 14:38:28 | INFO  | Live migration of b363afce-f5c0-4b92-9270-47eb9bb2b0fb (test-2) is still in progress 2025-11-08 14:38:31.051482 | orchestrator | 2025-11-08 14:38:31 | INFO  | Live migration of b363afce-f5c0-4b92-9270-47eb9bb2b0fb (test-2) is still in progress 2025-11-08 14:38:33.374389 | orchestrator | 2025-11-08 14:38:33 | INFO  | Live migration of b363afce-f5c0-4b92-9270-47eb9bb2b0fb (test-2) is still in progress 2025-11-08 14:38:35.681778 | orchestrator | 2025-11-08 14:38:35 | INFO  | Live migration of b363afce-f5c0-4b92-9270-47eb9bb2b0fb (test-2) is still in progress 2025-11-08 14:38:38.059457 | orchestrator | 2025-11-08 14:38:38 | INFO  | Live migration of b363afce-f5c0-4b92-9270-47eb9bb2b0fb (test-2) completed with status ACTIVE 2025-11-08 14:38:38.059555 | orchestrator | 2025-11-08 14:38:38 | INFO  | Live migrating server b678ff19-a77c-4553-8ef3-1c5a13cef8bf 2025-11-08 14:38:48.515147 | orchestrator | 2025-11-08 14:38:48 | INFO  | Live migration of b678ff19-a77c-4553-8ef3-1c5a13cef8bf (test-1) is still in progress 2025-11-08 14:38:50.863358 | orchestrator | 2025-11-08 14:38:50 | INFO  | Live migration of b678ff19-a77c-4553-8ef3-1c5a13cef8bf (test-1) is still in progress 2025-11-08 14:38:53.201082 | orchestrator | 2025-11-08 14:38:53 | INFO  | Live migration of b678ff19-a77c-4553-8ef3-1c5a13cef8bf (test-1) is still in progress 2025-11-08 14:38:55.517640 | orchestrator | 2025-11-08 14:38:55 | INFO  | Live migration of b678ff19-a77c-4553-8ef3-1c5a13cef8bf (test-1) is still in progress 2025-11-08 14:38:57.802092 | orchestrator | 2025-11-08 14:38:57 | INFO  | Live migration of b678ff19-a77c-4553-8ef3-1c5a13cef8bf (test-1) is still in progress 2025-11-08 14:39:00.089790 | orchestrator | 2025-11-08 14:39:00 | INFO  | Live migration of b678ff19-a77c-4553-8ef3-1c5a13cef8bf (test-1) is still in progress 2025-11-08 14:39:02.379209 | orchestrator | 2025-11-08 14:39:02 | INFO  | Live migration of b678ff19-a77c-4553-8ef3-1c5a13cef8bf (test-1) is still in progress 2025-11-08 14:39:04.734484 | orchestrator | 2025-11-08 14:39:04 | INFO  | Live migration of b678ff19-a77c-4553-8ef3-1c5a13cef8bf (test-1) is still in progress 2025-11-08 14:39:07.070735 | orchestrator | 2025-11-08 
2025-11-08 14:39:07.070735 | orchestrator | 2025-11-08 14:39:07 | INFO  | Live migration of b678ff19-a77c-4553-8ef3-1c5a13cef8bf (test-1) completed with status ACTIVE
2025-11-08 14:39:07.070840 | orchestrator | 2025-11-08 14:39:07 | INFO  | Live migrating server 0527ae6f-4c43-4e2c-8b16-61c45425b95d
2025-11-08 14:39:17.301103 | orchestrator | 2025-11-08 14:39:17 | INFO  | Live migration of 0527ae6f-4c43-4e2c-8b16-61c45425b95d (test) is still in progress
2025-11-08 14:39:19.688988 | orchestrator | 2025-11-08 14:39:19 | INFO  | Live migration of 0527ae6f-4c43-4e2c-8b16-61c45425b95d (test) is still in progress
2025-11-08 14:39:22.097973 | orchestrator | 2025-11-08 14:39:22 | INFO  | Live migration of 0527ae6f-4c43-4e2c-8b16-61c45425b95d (test) is still in progress
2025-11-08 14:39:24.484091 | orchestrator | 2025-11-08 14:39:24 | INFO  | Live migration of 0527ae6f-4c43-4e2c-8b16-61c45425b95d (test) is still in progress
2025-11-08 14:39:26.795665 | orchestrator | 2025-11-08 14:39:26 | INFO  | Live migration of 0527ae6f-4c43-4e2c-8b16-61c45425b95d (test) is still in progress
2025-11-08 14:39:29.224483 | orchestrator | 2025-11-08 14:39:29 | INFO  | Live migration of 0527ae6f-4c43-4e2c-8b16-61c45425b95d (test) is still in progress
2025-11-08 14:39:31.473292 | orchestrator | 2025-11-08 14:39:31 | INFO  | Live migration of 0527ae6f-4c43-4e2c-8b16-61c45425b95d (test) is still in progress
2025-11-08 14:39:33.798914 | orchestrator | 2025-11-08 14:39:33 | INFO  | Live migration of 0527ae6f-4c43-4e2c-8b16-61c45425b95d (test) is still in progress
2025-11-08 14:39:36.106656 | orchestrator | 2025-11-08 14:39:36 | INFO  | Live migration of 0527ae6f-4c43-4e2c-8b16-61c45425b95d (test) is still in progress
2025-11-08 14:39:38.383626 | orchestrator | 2025-11-08 14:39:38 | INFO  | Live migration of 0527ae6f-4c43-4e2c-8b16-61c45425b95d (test) is still in progress
2025-11-08 14:39:40.703962 | orchestrator | 2025-11-08 14:39:40 | INFO  | Live migration of 0527ae6f-4c43-4e2c-8b16-61c45425b95d (test) completed with status ACTIVE
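The `osism manage compute migrate` call above drains testbed-node-4 by live-migrating every instance to testbed-node-5 and polling each one until it reports ACTIVE again. A minimal sketch of the same drain-and-poll pattern with the plain OpenStack client (not the osism implementation; option and column names are assumptions and vary between python-openstackclient releases) could look like this:

  # Sketch only: drain one hypervisor onto another and wait for each migration.
  source_host=testbed-node-4
  target_host=testbed-node-5
  for id in $(openstack server list --all-projects --host "$source_host" -f value -c ID); do
      echo "Live migrating server $id"
      openstack server migrate --live-migration --host "$target_host" "$id"
      # Poll until the instance is ACTIVE on the target host again.
      until [ "$(openstack server show "$id" -f value -c status)" = "ACTIVE" ] &&
            [ "$(openstack server show "$id" -f value -c 'OS-EXT-SRV-ATTR:host')" = "$target_host" ]; do
          echo "Live migration of $id is still in progress"
          sleep 2
      done
  done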
2025-11-08 14:39:41.235529 | orchestrator | + compute_list
2025-11-08 14:39:41.235657 | orchestrator | + osism manage compute list testbed-node-3
2025-11-08 14:39:44.536654 | orchestrator | +------+--------+----------+
2025-11-08 14:39:44.536748 | orchestrator | | ID | Name | Status |
2025-11-08 14:39:44.536758 | orchestrator | |------+--------+----------|
2025-11-08 14:39:44.536766 | orchestrator | +------+--------+----------+
2025-11-08 14:39:45.110947 | orchestrator | + osism manage compute list testbed-node-4
2025-11-08 14:39:48.396627 | orchestrator | +------+--------+----------+
2025-11-08 14:39:48.396755 | orchestrator | | ID | Name | Status |
2025-11-08 14:39:48.396772 | orchestrator | |------+--------+----------|
2025-11-08 14:39:48.396784 | orchestrator | +------+--------+----------+
2025-11-08 14:39:48.899009 | orchestrator | + osism manage compute list testbed-node-5
2025-11-08 14:39:52.545378 | orchestrator | +--------------------------------------+--------+----------+
2025-11-08 14:39:52.545458 | orchestrator | | ID | Name | Status |
2025-11-08 14:39:52.545465 | orchestrator | |--------------------------------------+--------+----------|
2025-11-08 14:39:52.545470 | orchestrator | | 16dc5413-b5f0-4a4a-8e28-5413dd19f227 | test-4 | ACTIVE |
2025-11-08 14:39:52.545474 | orchestrator | | 4dc46984-f8a3-4c18-82d8-3459bc63dc46 | test-3 | ACTIVE |
2025-11-08 14:39:52.545478 | orchestrator | | b363afce-f5c0-4b92-9270-47eb9bb2b0fb | test-2 | ACTIVE |
2025-11-08 14:39:52.545482 | orchestrator | | b678ff19-a77c-4553-8ef3-1c5a13cef8bf | test-1 | ACTIVE |
2025-11-08 14:39:52.545486 | orchestrator | | 0527ae6f-4c43-4e2c-8b16-61c45425b95d | test | ACTIVE |
2025-11-08 14:39:52.545490 | orchestrator | +--------------------------------------+--------+----------+
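The empty tables for testbed-node-3 and testbed-node-4 and the five ACTIVE rows for testbed-node-5 confirm that the source hypervisors are drained and every test server now runs on the target. A comparable per-host check with the plain client (a sketch, not the osism code behind `osism manage compute list`) would be:

  # Sketch: list instances per hypervisor to verify the drain.
  for host in testbed-node-3 testbed-node-4 testbed-node-5; do
      echo "=== $host ==="
      openstack server list --all-projects --host "$host" -c ID -c Name -c Status
  done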
2025-11-08 14:39:53.133256 | orchestrator | + server_ping
2025-11-08 14:39:53.134645 | orchestrator | ++ tr -d '\r'
2025-11-08 14:39:53.134691 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-11-08 14:39:56.573150 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-11-08 14:39:56.573264 | orchestrator | + ping -c3 192.168.112.123
2025-11-08 14:39:56.584420 | orchestrator | PING 192.168.112.123 (192.168.112.123) 56(84) bytes of data.
2025-11-08 14:39:56.584504 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=1 ttl=63 time=6.93 ms
2025-11-08 14:39:57.581826 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=2 ttl=63 time=2.78 ms
2025-11-08 14:39:58.584194 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=3 ttl=63 time=2.23 ms
2025-11-08 14:39:58.584330 | orchestrator |
2025-11-08 14:39:58.584425 | orchestrator | --- 192.168.112.123 ping statistics ---
2025-11-08 14:39:58.584449 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-11-08 14:39:58.584467 | orchestrator | rtt min/avg/max/mdev = 2.229/3.980/6.932/2.099 ms
2025-11-08 14:39:58.584611 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-11-08 14:39:58.584641 | orchestrator | + ping -c3 192.168.112.151
2025-11-08 14:39:58.598945 | orchestrator | PING 192.168.112.151 (192.168.112.151) 56(84) bytes of data.
2025-11-08 14:39:58.599031 | orchestrator | 64 bytes from 192.168.112.151: icmp_seq=1 ttl=63 time=8.93 ms
2025-11-08 14:39:59.594788 | orchestrator | 64 bytes from 192.168.112.151: icmp_seq=2 ttl=63 time=2.98 ms
2025-11-08 14:40:00.594152 | orchestrator | 64 bytes from 192.168.112.151: icmp_seq=3 ttl=63 time=2.11 ms
2025-11-08 14:40:00.594240 | orchestrator |
2025-11-08 14:40:00.594251 | orchestrator | --- 192.168.112.151 ping statistics ---
2025-11-08 14:40:00.594260 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-11-08 14:40:00.594267 | orchestrator | rtt min/avg/max/mdev = 2.105/4.669/8.926/3.030 ms
2025-11-08 14:40:00.594553 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-11-08 14:40:00.594573 | orchestrator | + ping -c3 192.168.112.176
2025-11-08 14:40:00.606428 | orchestrator | PING 192.168.112.176 (192.168.112.176) 56(84) bytes of data.
2025-11-08 14:40:00.606544 | orchestrator | 64 bytes from 192.168.112.176: icmp_seq=1 ttl=63 time=6.21 ms
2025-11-08 14:40:01.604414 | orchestrator | 64 bytes from 192.168.112.176: icmp_seq=2 ttl=63 time=2.67 ms
2025-11-08 14:40:02.607089 | orchestrator | 64 bytes from 192.168.112.176: icmp_seq=3 ttl=63 time=2.40 ms
2025-11-08 14:40:02.607262 | orchestrator |
2025-11-08 14:40:02.607284 | orchestrator | --- 192.168.112.176 ping statistics ---
2025-11-08 14:40:02.607298 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-11-08 14:40:02.607310 | orchestrator | rtt min/avg/max/mdev = 2.396/3.758/6.213/1.739 ms
2025-11-08 14:40:02.607412 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-11-08 14:40:02.607429 | orchestrator | + ping -c3 192.168.112.127
2025-11-08 14:40:02.620634 | orchestrator | PING 192.168.112.127 (192.168.112.127) 56(84) bytes of data.
2025-11-08 14:40:02.620741 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=1 ttl=63 time=7.08 ms
2025-11-08 14:40:03.616867 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=2 ttl=63 time=2.62 ms
2025-11-08 14:40:04.618996 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=3 ttl=63 time=2.13 ms
2025-11-08 14:40:04.619092 | orchestrator |
2025-11-08 14:40:04.619104 | orchestrator | --- 192.168.112.127 ping statistics ---
2025-11-08 14:40:04.619113 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-11-08 14:40:04.619121 | orchestrator | rtt min/avg/max/mdev = 2.127/3.943/7.081/2.228 ms
2025-11-08 14:40:04.619155 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-11-08 14:40:04.619203 | orchestrator | + ping -c3 192.168.112.103
2025-11-08 14:40:04.629728 | orchestrator | PING 192.168.112.103 (192.168.112.103) 56(84) bytes of data.
2025-11-08 14:40:04.629823 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=1 ttl=63 time=6.83 ms
2025-11-08 14:40:05.625397 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=2 ttl=63 time=2.21 ms
2025-11-08 14:40:06.626817 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=3 ttl=63 time=1.98 ms
2025-11-08 14:40:06.626949 | orchestrator |
2025-11-08 14:40:06.626962 | orchestrator | --- 192.168.112.103 ping statistics ---
2025-11-08 14:40:06.626971 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-11-08 14:40:06.626979 | orchestrator | rtt min/avg/max/mdev = 1.981/3.672/6.829/2.233 ms
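The `+`-prefixed lines above are the shell xtrace of the job's server_ping check; reconstructed from that trace (the actual helper in the testbed scripts may differ in detail), it is essentially:

  # Reconstructed from the xtrace above, not copied from the testbed repository.
  server_ping() {
      for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r'); do
          ping -c3 "$address"
      done
  }

Running it before the migrations and again here verifies that all floating IPs stay reachable while the instances move between hypervisors.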
2025-11-08 14:40:06.729407 | orchestrator | ok: Runtime: 0:22:31.705113
2025-11-08 14:40:06.767959 |
2025-11-08 14:40:06.768096 | TASK [Run tempest]
2025-11-08 14:40:07.303498 | orchestrator | skipping: Conditional result was False
2025-11-08 14:40:07.323577 |
2025-11-08 14:40:07.323799 | TASK [Check prometheus alert status]
2025-11-08 14:40:07.872436 | orchestrator | skipping: Conditional result was False
2025-11-08 14:40:07.875644 |
2025-11-08 14:40:07.875824 | PLAY RECAP
2025-11-08 14:40:07.875979 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0
2025-11-08 14:40:07.876055 |
2025-11-08 14:40:08.117371 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-11-08 14:40:08.120181 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-11-08 14:40:09.568407 |
2025-11-08 14:40:09.568660 | PLAY [Post output play]
2025-11-08 14:40:09.596583 |
2025-11-08 14:40:09.596779 | LOOP [stage-output : Register sources]
2025-11-08 14:40:09.654691 |
2025-11-08 14:40:09.654990 | TASK [stage-output : Check sudo]
2025-11-08 14:40:11.043912 | orchestrator | sudo: a password is required
2025-11-08 14:40:11.197962 | orchestrator | ok: Runtime: 0:00:00.495572
2025-11-08 14:40:11.211688 |
2025-11-08 14:40:11.211830 | LOOP [stage-output : Set source and destination for files and folders]
2025-11-08 14:40:11.250938 |
2025-11-08 14:40:11.251211 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-11-08 14:40:11.320825 | orchestrator | ok
2025-11-08 14:40:11.330933 |
2025-11-08 14:40:11.331067 | LOOP [stage-output : Ensure target folders exist]
2025-11-08 14:40:11.792980 | orchestrator | ok: "docs"
2025-11-08 14:40:11.793412 |
2025-11-08 14:40:12.070778 | orchestrator | ok: "artifacts"
2025-11-08 14:40:12.321827 | orchestrator | ok: "logs"
2025-11-08 14:40:12.338581 |
2025-11-08 14:40:12.338759 | LOOP [stage-output : Copy files and folders to staging folder]
2025-11-08 14:40:12.371655 |
2025-11-08 14:40:12.371880 | TASK [stage-output : Make all log files readable]
2025-11-08 14:40:12.694155 | orchestrator | ok
2025-11-08 14:40:12.703805 |
2025-11-08 14:40:12.703951 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-11-08 14:40:12.739252 | orchestrator | skipping: Conditional result was False
2025-11-08 14:40:12.755534 |
2025-11-08 14:40:12.755747 | TASK [stage-output : Discover log files for compression]
2025-11-08 14:40:12.779899 | orchestrator | skipping: Conditional result was False
2025-11-08 14:40:12.795580 |
2025-11-08 14:40:12.795948 | LOOP [stage-output : Archive everything from logs]
2025-11-08 14:40:12.848649 |
2025-11-08 14:40:12.848826 | PLAY [Post cleanup play]
2025-11-08 14:40:12.858667 |
2025-11-08 14:40:12.858780 | TASK [Set cloud fact (Zuul deployment)]
2025-11-08 14:40:12.925374 | orchestrator | ok
2025-11-08 14:40:12.937368 |
2025-11-08 14:40:12.937481 | TASK [Set cloud fact (local deployment)]
2025-11-08 14:40:12.974129 | orchestrator | skipping: Conditional result was False
2025-11-08 14:40:12.993302 |
2025-11-08 14:40:12.993462 | TASK [Clean the cloud environment]
2025-11-08 14:40:14.124995 | orchestrator | 2025-11-08 14:40:14 - clean up servers
2025-11-08 14:40:14.972535 | orchestrator | 2025-11-08 14:40:14 - testbed-manager
2025-11-08 14:40:15.063453 | orchestrator | 2025-11-08 14:40:15 - testbed-node-4
2025-11-08 14:40:15.151721 | orchestrator | 2025-11-08 14:40:15 - testbed-node-5
2025-11-08 14:40:15.237894 | orchestrator | 2025-11-08 14:40:15 - testbed-node-3
2025-11-08 14:40:15.342897 | orchestrator | 2025-11-08 14:40:15 - testbed-node-1
2025-11-08 14:40:15.431664 | orchestrator | 2025-11-08 14:40:15 - testbed-node-0
2025-11-08 14:40:15.529714 | orchestrator | 2025-11-08 14:40:15 - testbed-node-2
2025-11-08 14:40:15.626085 | orchestrator | 2025-11-08 14:40:15 - clean up keypairs
2025-11-08 14:40:15.651391 | orchestrator | 2025-11-08 14:40:15 - testbed
2025-11-08 14:40:15.678875 | orchestrator | 2025-11-08 14:40:15 - wait for servers to be gone
2025-11-08 14:40:26.498993 | orchestrator | 2025-11-08 14:40:26 - clean up ports
2025-11-08 14:40:26.700383 | orchestrator | 2025-11-08 14:40:26 - 01c3e288-2a01-4f16-a2d7-a99438c35796
2025-11-08 14:40:27.023580 | orchestrator | 2025-11-08 14:40:27 - 164d56c5-9f32-4e36-b7e2-ef2c4962aa52
2025-11-08 14:40:27.323436 | orchestrator | 2025-11-08 14:40:27 - 2021adbc-2710-4624-a423-36b5a30380f6
2025-11-08 14:40:27.540817 | orchestrator | 2025-11-08 14:40:27 - 3d10797f-2aa3-4f5b-88f9-d1c33418a853
2025-11-08 14:40:27.937063 | orchestrator | 2025-11-08 14:40:27 - 5316225b-76ec-414d-899f-fa79ff9cf5d9
2025-11-08 14:40:28.165636 | orchestrator | 2025-11-08 14:40:28 - 996f100f-3857-4cfe-8204-922aeaf8a95b
2025-11-08 14:40:28.383034 | orchestrator | 2025-11-08 14:40:28 - a2bfe24a-3eb8-41e4-bc30-30d86fc3ec26
2025-11-08 14:40:28.588439 | orchestrator | 2025-11-08 14:40:28 - clean up volumes
2025-11-08 14:40:28.706188 | orchestrator | 2025-11-08 14:40:28 - testbed-volume-1-node-base
2025-11-08 14:40:28.746380 | orchestrator | 2025-11-08 14:40:28 - testbed-volume-4-node-base
2025-11-08 14:40:28.788739 | orchestrator | 2025-11-08 14:40:28 - testbed-volume-2-node-base
2025-11-08 14:40:28.831898 | orchestrator | 2025-11-08 14:40:28 - testbed-volume-0-node-base
2025-11-08 14:40:28.874159 | orchestrator | 2025-11-08 14:40:28 - testbed-volume-3-node-base
2025-11-08 14:40:28.927253 | orchestrator | 2025-11-08 14:40:28 - testbed-volume-5-node-base
2025-11-08 14:40:28.968214 | orchestrator | 2025-11-08 14:40:28 - testbed-volume-manager-base
2025-11-08 14:40:29.011641 | orchestrator | 2025-11-08 14:40:29 - testbed-volume-2-node-5
2025-11-08 14:40:29.051933 | orchestrator | 2025-11-08 14:40:29 - testbed-volume-0-node-3
2025-11-08 14:40:29.093995 | orchestrator | 2025-11-08 14:40:29 - testbed-volume-7-node-4
2025-11-08 14:40:29.134944 | orchestrator | 2025-11-08 14:40:29 - testbed-volume-6-node-3
2025-11-08 14:40:29.183398 | orchestrator | 2025-11-08 14:40:29 - testbed-volume-8-node-5
2025-11-08 14:40:29.225333 | orchestrator | 2025-11-08 14:40:29 - testbed-volume-1-node-4
2025-11-08 14:40:29.271371 | orchestrator | 2025-11-08 14:40:29 - testbed-volume-4-node-4
2025-11-08 14:40:29.315054 | orchestrator | 2025-11-08 14:40:29 - testbed-volume-5-node-5
2025-11-08 14:40:29.355949 | orchestrator | 2025-11-08 14:40:29 - testbed-volume-3-node-3
2025-11-08 14:40:29.404168 | orchestrator | 2025-11-08 14:40:29 - disconnect routers
2025-11-08 14:40:29.520271 | orchestrator | 2025-11-08 14:40:29 - testbed
2025-11-08 14:40:30.926180 | orchestrator | 2025-11-08 14:40:30 - clean up subnets
2025-11-08 14:40:30.977671 | orchestrator | 2025-11-08 14:40:30 - subnet-testbed-management
2025-11-08 14:40:31.200174 | orchestrator | 2025-11-08 14:40:31 - clean up networks
2025-11-08 14:40:31.330445 | orchestrator | 2025-11-08 14:40:31 - net-testbed-management
2025-11-08 14:40:31.623483 | orchestrator | 2025-11-08 14:40:31 - clean up security groups
2025-11-08 14:40:31.665407 | orchestrator | 2025-11-08 14:40:31 - testbed-node
2025-11-08 14:40:31.773452 | orchestrator | 2025-11-08 14:40:31 - testbed-management
2025-11-08 14:40:31.892104 | orchestrator | 2025-11-08 14:40:31 - clean up floating ips
2025-11-08 14:40:31.924565 | orchestrator | 2025-11-08 14:40:31 - 81.163.192.186
2025-11-08 14:40:32.276236 | orchestrator | 2025-11-08 14:40:32 - clean up routers
2025-11-08 14:40:32.383737 | orchestrator | 2025-11-08 14:40:32 - testbed
2025-11-08 14:40:33.561549 | orchestrator | ok: Runtime: 0:00:19.893771
2025-11-08 14:40:33.566241 |
2025-11-08 14:40:33.566421 | PLAY RECAP
2025-11-08 14:40:33.566554 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-11-08 14:40:33.566645 |
2025-11-08 14:40:33.739806 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-11-08 14:40:33.740993 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-11-08 14:40:34.480547 |
2025-11-08 14:40:34.480737 | PLAY [Cleanup play]
2025-11-08 14:40:34.496264 |
2025-11-08 14:40:34.496378 | TASK [Set cloud fact (Zuul deployment)]
2025-11-08 14:40:34.536504 | orchestrator | ok
2025-11-08 14:40:34.543148 |
2025-11-08 14:40:34.543267 | TASK [Set cloud fact (local deployment)]
2025-11-08 14:40:34.577031 | orchestrator | skipping: Conditional result was False
2025-11-08 14:40:34.587064 |
2025-11-08 14:40:34.587176 | TASK [Clean the cloud environment]
2025-11-08 14:40:35.749116 | orchestrator | 2025-11-08 14:40:35 - clean up servers
2025-11-08 14:40:36.280186 | orchestrator | 2025-11-08 14:40:36 - clean up keypairs
2025-11-08 14:40:36.301495 | orchestrator | 2025-11-08 14:40:36 - wait for servers to be gone
2025-11-08 14:40:36.345780 | orchestrator | 2025-11-08 14:40:36 - clean up ports
2025-11-08 14:40:36.433953 | orchestrator | 2025-11-08 14:40:36 - clean up volumes
2025-11-08 14:40:36.498663 | orchestrator | 2025-11-08 14:40:36 - disconnect routers
2025-11-08 14:40:36.526169 | orchestrator | 2025-11-08 14:40:36 - clean up subnets
2025-11-08 14:40:36.546484 | orchestrator | 2025-11-08 14:40:36 - clean up networks
2025-11-08 14:40:36.669475 | orchestrator | 2025-11-08 14:40:36 - clean up security groups
2025-11-08 14:40:36.706907 | orchestrator | 2025-11-08 14:40:36 - clean up floating ips
2025-11-08 14:40:36.731366 | orchestrator | 2025-11-08 14:40:36 - clean up routers
2025-11-08 14:40:37.125710 | orchestrator | ok: Runtime: 0:00:01.409642
2025-11-08 14:40:37.129711 |
2025-11-08 14:40:37.129878 | PLAY RECAP
2025-11-08 14:40:37.129997 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-11-08 14:40:37.130058 |
2025-11-08 14:40:37.249441 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
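The "Clean the cloud environment" task tears the testbed down in dependency order: servers and keypairs first, then (after waiting for the servers to disappear) ports, volumes, router interfaces, subnets, networks, security groups, floating IPs and finally the router; the second pass from cleanup.yml finds nothing left and completes in about a second. A rough sketch of the same ordering with the plain OpenStack client (the real cleanup helper is not shown in this log; the name filters are assumptions):

  # Sketch of the teardown order logged above; filters and names are illustrative.
  openstack server list -f value -c ID --name 'testbed-*' | xargs -r -n1 openstack server delete
  openstack keypair delete testbed
  # wait for the servers to be gone, then work bottom-up through the network resources
  openstack port list --network net-testbed-management -f value -c ID | xargs -r -n1 openstack port delete
  openstack volume list -f value -c ID --name 'testbed-volume-*' | xargs -r -n1 openstack volume delete
  openstack router remove subnet testbed subnet-testbed-management
  openstack subnet delete subnet-testbed-management
  openstack network delete net-testbed-management
  openstack security group delete testbed-node testbed-management
  openstack floating ip delete 81.163.192.186
  openstack router delete testbed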
2025-11-08 14:40:37.251801 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-11-08 14:40:37.967729 |
2025-11-08 14:40:37.967878 | PLAY [Base post-fetch]
2025-11-08 14:40:37.982992 |
2025-11-08 14:40:37.983115 | TASK [fetch-output : Set log path for multiple nodes]
2025-11-08 14:40:38.038892 | orchestrator | skipping: Conditional result was False
2025-11-08 14:40:38.054976 |
2025-11-08 14:40:38.055181 | TASK [fetch-output : Set log path for single node]
2025-11-08 14:40:38.105794 | orchestrator | ok
2025-11-08 14:40:38.114563 |
2025-11-08 14:40:38.114722 | LOOP [fetch-output : Ensure local output dirs]
2025-11-08 14:40:38.620542 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/0d6f5d21fae74aeb8ef4d65207790d8f/work/logs"
2025-11-08 14:40:38.899877 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/0d6f5d21fae74aeb8ef4d65207790d8f/work/artifacts"
2025-11-08 14:40:39.172274 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/0d6f5d21fae74aeb8ef4d65207790d8f/work/docs"
2025-11-08 14:40:39.195828 |
2025-11-08 14:40:39.196011 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-11-08 14:40:40.170774 | orchestrator | changed: .d..t...... ./
2025-11-08 14:40:40.171331 | orchestrator | changed: All items complete
2025-11-08 14:40:40.171389 |
2025-11-08 14:40:40.871545 | orchestrator | changed: .d..t...... ./
2025-11-08 14:40:41.632320 | orchestrator | changed: .d..t...... ./
2025-11-08 14:40:41.658072 |
2025-11-08 14:40:41.658210 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-11-08 14:40:41.685844 | orchestrator | skipping: Conditional result was False
2025-11-08 14:40:41.688514 | orchestrator | skipping: Conditional result was False
2025-11-08 14:40:41.713317 |
2025-11-08 14:40:41.713431 | PLAY RECAP
2025-11-08 14:40:41.713514 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-11-08 14:40:41.713557 |
2025-11-08 14:40:41.860169 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
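The "changed: .d..t...... ./" lines above are rsync itemized-changes output from the fetch-output role: the leading ".d" marks a directory and the "t" means only its timestamp differed, i.e. the logs, artifacts and docs trees needed nothing but metadata updates when they were pulled from the node into the executor's work directory. An invocation producing that kind of output (a sketch; the role's actual source path and options are assumptions here) looks like:

  # Sketch: itemized rsync pull of staged output from the node to the executor.
  rsync -a --itemize-changes orchestrator:zuul-output/logs/ \
      /var/lib/zuul/builds/0d6f5d21fae74aeb8ef4d65207790d8f/work/logs/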
2025-11-08 14:40:41.862238 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-11-08 14:40:42.576901 |
2025-11-08 14:40:42.577053 | PLAY [Base post]
2025-11-08 14:40:42.591398 |
2025-11-08 14:40:42.591520 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-11-08 14:40:43.863743 | orchestrator | changed
2025-11-08 14:40:43.875641 |
2025-11-08 14:40:43.875839 | PLAY RECAP
2025-11-08 14:40:43.875946 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-11-08 14:40:43.876048 |
2025-11-08 14:40:44.029226 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-11-08 14:40:44.030420 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-11-08 14:40:44.817653 |
2025-11-08 14:40:44.817817 | PLAY [Base post-logs]
2025-11-08 14:40:44.828662 |
2025-11-08 14:40:44.828794 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-11-08 14:40:45.355537 | localhost | changed
2025-11-08 14:40:45.387537 |
2025-11-08 14:40:45.387778 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-11-08 14:40:45.428636 | localhost | ok
2025-11-08 14:40:45.432186 |
2025-11-08 14:40:45.432303 | TASK [Set zuul-log-path fact]
2025-11-08 14:40:45.458454 | localhost | ok
2025-11-08 14:40:45.468916 |
2025-11-08 14:40:45.469029 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-11-08 14:40:45.507202 | localhost | ok
2025-11-08 14:40:45.513171 |
2025-11-08 14:40:45.513324 | TASK [upload-logs : Create log directories]
2025-11-08 14:40:46.024306 | localhost | changed
2025-11-08 14:40:46.029446 |
2025-11-08 14:40:46.029695 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-11-08 14:40:46.542490 | localhost -> localhost | ok: Runtime: 0:00:00.005932
2025-11-08 14:40:46.547499 |
2025-11-08 14:40:46.547678 | TASK [upload-logs : Upload logs to log server]
2025-11-08 14:40:47.102566 | localhost | Output suppressed because no_log was given
2025-11-08 14:40:47.105363 |
2025-11-08 14:40:47.105515 | LOOP [upload-logs : Compress console log and json output]
2025-11-08 14:40:47.168183 | localhost | skipping: Conditional result was False
2025-11-08 14:40:47.173450 | localhost | skipping: Conditional result was False
2025-11-08 14:40:47.186927 |
2025-11-08 14:40:47.187171 | LOOP [upload-logs : Upload compressed console log and json output]
2025-11-08 14:40:47.241354 | localhost | skipping: Conditional result was False
2025-11-08 14:40:47.241988 |
2025-11-08 14:40:47.245564 | localhost | skipping: Conditional result was False
2025-11-08 14:40:47.259192 |
2025-11-08 14:40:47.259414 | LOOP [upload-logs : Upload console log and json output]